Nov 8 00:45:46.362610 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:45:46.362651 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:45:46.362666 kernel: BIOS-provided physical RAM map:
Nov 8 00:45:46.362676 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Nov 8 00:45:46.362685 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Nov 8 00:45:46.362701 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 00:45:46.362713 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Nov 8 00:45:46.362723 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Nov 8 00:45:46.362733 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 00:45:46.362743 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 8 00:45:46.362753 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 00:45:46.362763 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 00:45:46.362782 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Nov 8 00:45:46.362796 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 8 00:45:46.362815 kernel: NX (Execute Disable) protection: active
Nov 8 00:45:46.362826 kernel: APIC: Static calls initialized
Nov 8 00:45:46.362837 kernel: SMBIOS 2.8 present.
Nov 8 00:45:46.362847 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Nov 8 00:45:46.362858 kernel: Hypervisor detected: KVM
Nov 8 00:45:46.362873 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 00:45:46.362884 kernel: kvm-clock: using sched offset of 8549641835 cycles
Nov 8 00:45:46.362896 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 00:45:46.362907 kernel: tsc: Detected 2000.002 MHz processor
Nov 8 00:45:46.362919 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:45:46.362930 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:45:46.362941 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Nov 8 00:45:46.362952 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 00:45:46.362963 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:45:46.362978 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Nov 8 00:45:46.362989 kernel: Using GB pages for direct mapping
Nov 8 00:45:46.363000 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:45:46.363011 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Nov 8 00:45:46.363023 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:45:46.363034 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:45:46.363045 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:45:46.363056 kernel: ACPI: FACS 0x000000007FFE0000 000040
Nov 8 00:45:46.363067 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:45:46.363082 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:45:46.363093 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:45:46.363104 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 00:45:46.363121 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Nov 8 00:45:46.363133 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Nov 8 00:45:46.363144 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Nov 8 00:45:46.363160 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Nov 8 00:45:46.363172 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Nov 8 00:45:46.363189 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Nov 8 00:45:46.363200 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Nov 8 00:45:46.363212 kernel: No NUMA configuration found
Nov 8 00:45:46.363223 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Nov 8 00:45:46.363235 kernel: NODE_DATA(0) allocated [mem 0x17fff8000-0x17fffdfff]
Nov 8 00:45:46.363246 kernel: Zone ranges:
Nov 8 00:45:46.363262 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:45:46.363274 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 8 00:45:46.363285 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Nov 8 00:45:46.363296 kernel: Movable zone start for each node
Nov 8 00:45:46.363308 kernel: Early memory node ranges
Nov 8 00:45:46.363320 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 00:45:46.363331 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Nov 8 00:45:46.363347 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Nov 8 00:45:46.363359 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Nov 8 00:45:46.363375 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:45:46.363387 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 00:45:46.363399 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Nov 8 00:45:46.363410 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 00:45:46.363422 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 00:45:46.363433 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:45:46.363445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 00:45:46.363457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 00:45:46.363468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:45:46.363499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 00:45:46.363510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 00:45:46.363522 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:45:46.363539 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 8 00:45:46.363551 kernel: TSC deadline timer available
Nov 8 00:45:46.363562 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:45:46.363574 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 00:45:46.363585 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 8 00:45:46.363597 kernel: kvm-guest: setup PV sched yield
Nov 8 00:45:46.363613 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 8 00:45:46.363624 kernel: Booting paravirtualized kernel on KVM
Nov 8 00:45:46.363636 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:45:46.363648 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:45:46.363659 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:45:46.363671 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:45:46.363682 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:45:46.363694 kernel: kvm-guest: PV spinlocks enabled
Nov 8 00:45:46.363705 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:45:46.363723 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:45:46.363734 kernel: random: crng init done
Nov 8 00:45:46.363775 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:45:46.363787 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 00:45:46.363799 kernel: Fallback order for Node 0: 0
Nov 8 00:45:46.363810 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901
Nov 8 00:45:46.363822 kernel: Policy zone: Normal
Nov 8 00:45:46.363833 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:45:46.363869 kernel: software IO TLB: area num 2.
Nov 8 00:45:46.363881 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 227308K reserved, 0K cma-reserved)
Nov 8 00:45:46.363893 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:45:46.363904 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:45:46.363916 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:45:46.363928 kernel: Dynamic Preempt: voluntary
Nov 8 00:45:46.363939 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:45:46.363952 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:45:46.363964 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:45:46.363980 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:45:46.363999 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:45:46.364011 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:45:46.364023 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:45:46.364034 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:45:46.364046 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Nov 8 00:45:46.364057 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:45:46.364069 kernel: Console: colour VGA+ 80x25
Nov 8 00:45:46.364081 kernel: printk: console [tty0] enabled
Nov 8 00:45:46.364096 kernel: printk: console [ttyS0] enabled
Nov 8 00:45:46.364108 kernel: ACPI: Core revision 20230628
Nov 8 00:45:46.364119 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 8 00:45:46.364131 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:45:46.364143 kernel: x2apic enabled
Nov 8 00:45:46.364169 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 00:45:46.364185 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 8 00:45:46.364197 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 8 00:45:46.364209 kernel: kvm-guest: setup PV IPIs
Nov 8 00:45:46.364221 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 8 00:45:46.364233 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 8 00:45:46.364245 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002)
Nov 8 00:45:46.364262 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 00:45:46.364274 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 8 00:45:46.364286 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 8 00:45:46.364298 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:45:46.364317 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:45:46.364334 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:45:46.364346 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 8 00:45:46.364358 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 00:45:46.364370 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 00:45:46.364383 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 8 00:45:46.364396 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 8 00:45:46.364408 kernel: active return thunk: srso_alias_return_thunk
Nov 8 00:45:46.364420 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 8 00:45:46.364437 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Nov 8 00:45:46.364449 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:45:46.364461 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:45:46.366500 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:45:46.366521 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:45:46.366534 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Nov 8 00:45:46.366547 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:45:46.366559 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Nov 8 00:45:46.366571 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Nov 8 00:45:46.366590 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:45:46.366602 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:45:46.366615 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:45:46.366627 kernel: landlock: Up and running.
Nov 8 00:45:46.366639 kernel: SELinux: Initializing.
Nov 8 00:45:46.366659 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:45:46.366671 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 8 00:45:46.366684 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Nov 8 00:45:46.366696 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:45:46.366713 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:45:46.366725 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:45:46.366737 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 8 00:45:46.366749 kernel: ... version: 0
Nov 8 00:45:46.366761 kernel: ... bit width: 48
Nov 8 00:45:46.366773 kernel: ... generic registers: 6
Nov 8 00:45:46.366785 kernel: ... value mask: 0000ffffffffffff
Nov 8 00:45:46.366797 kernel: ... max period: 00007fffffffffff
Nov 8 00:45:46.366809 kernel: ... fixed-purpose events: 0
Nov 8 00:45:46.366825 kernel: ... event mask: 000000000000003f
Nov 8 00:45:46.366838 kernel: signal: max sigframe size: 3376
Nov 8 00:45:46.366850 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:45:46.366863 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:45:46.366875 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:45:46.366887 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:45:46.366899 kernel: .... node #0, CPUs: #1
Nov 8 00:45:46.366910 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:45:46.366923 kernel: smpboot: Max logical packages: 1
Nov 8 00:45:46.366939 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Nov 8 00:45:46.366951 kernel: devtmpfs: initialized
Nov 8 00:45:46.366963 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:45:46.366976 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:45:46.366988 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:45:46.367000 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:45:46.367012 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:45:46.367024 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:45:46.367036 kernel: audit: type=2000 audit(1762562743.721:1): state=initialized audit_enabled=0 res=1
Nov 8 00:45:46.367200 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:45:46.367212 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:45:46.367224 kernel: cpuidle: using governor menu
Nov 8 00:45:46.367236 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:45:46.367254 kernel: dca service started, version 1.12.1
Nov 8 00:45:46.367266 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 00:45:46.367279 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 00:45:46.367290 kernel: PCI: Using configuration type 1 for base access
Nov 8 00:45:46.367303 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:45:46.367320 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:45:46.367332 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:45:46.367344 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:45:46.367356 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:45:46.367368 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:45:46.367381 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:45:46.367398 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:45:46.367411 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:45:46.367423 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:45:46.367440 kernel: ACPI: Interpreter enabled
Nov 8 00:45:46.367452 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 8 00:45:46.367464 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:45:46.367501 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:45:46.367513 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 00:45:46.367525 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 00:45:46.367537 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 00:45:46.367946 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 00:45:46.368190 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 8 00:45:46.368402 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 8 00:45:46.368419 kernel: PCI host bridge to bus 0000:00
Nov 8 00:45:46.370696 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 00:45:46.370891 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 00:45:46.371263 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 00:45:46.371453 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 8 00:45:46.371680 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 00:45:46.371865 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Nov 8 00:45:46.372052 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 00:45:46.372337 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 00:45:46.387974 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Nov 8 00:45:46.388194 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Nov 8 00:45:46.388410 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Nov 8 00:45:46.388634 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Nov 8 00:45:46.388836 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 00:45:46.389072 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000
Nov 8 00:45:46.389277 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f]
Nov 8 00:45:46.389499 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Nov 8 00:45:46.389706 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 8 00:45:46.389928 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 8 00:45:46.390131 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Nov 8 00:45:46.390330 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Nov 8 00:45:46.390635 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 8 00:45:46.390868 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Nov 8 00:45:46.391115 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 00:45:46.391349 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 00:45:46.391648 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 00:45:46.391863 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df]
Nov 8 00:45:46.392068 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff]
Nov 8 00:45:46.392298 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 00:45:46.392526 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 8 00:45:46.392543 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 00:45:46.392555 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 00:45:46.392573 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 00:45:46.392586 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 00:45:46.392606 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 00:45:46.392618 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 00:45:46.392630 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 00:45:46.392642 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 00:45:46.392653 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 00:45:46.392666 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 00:45:46.392677 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 00:45:46.392694 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 00:45:46.392706 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 00:45:46.392718 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 00:45:46.392730 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 00:45:46.392742 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 00:45:46.392754 kernel: iommu: Default domain type: Translated
Nov 8 00:45:46.392766 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:45:46.392778 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:45:46.392790 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 00:45:46.392806 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Nov 8 00:45:46.392818 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Nov 8 00:45:46.393024 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 00:45:46.393231 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 00:45:46.393440 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 00:45:46.393458 kernel: vgaarb: loaded
Nov 8 00:45:46.393487 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 8 00:45:46.393500 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 8 00:45:46.393518 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 00:45:46.393530 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:45:46.393542 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:45:46.393554 kernel: pnp: PnP ACPI init
Nov 8 00:45:46.393807 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 00:45:46.393826 kernel: pnp: PnP ACPI: found 5 devices
Nov 8 00:45:46.393839 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:45:46.393850 kernel: NET: Registered PF_INET protocol family
Nov 8 00:45:46.393868 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:45:46.393880 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 8 00:45:46.393892 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:45:46.393904 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 00:45:46.393916 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 8 00:45:46.393927 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 8 00:45:46.393939 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:45:46.393951 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 8 00:45:46.393963 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:45:46.393980 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:45:46.394170 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 00:45:46.394358 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 00:45:46.394566 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 00:45:46.394757 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 8 00:45:46.394950 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 00:45:46.395160 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Nov 8 00:45:46.395180 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:45:46.395199 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 8 00:45:46.395211 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Nov 8 00:45:46.395222 kernel: Initialise system trusted keyrings
Nov 8 00:45:46.395232 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 8 00:45:46.395243 kernel: Key type asymmetric registered
Nov 8 00:45:46.395253 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:45:46.395264 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:45:46.395275 kernel: io scheduler mq-deadline registered
Nov 8 00:45:46.395285 kernel: io scheduler kyber registered
Nov 8 00:45:46.395301 kernel: io scheduler bfq registered
Nov 8 00:45:46.395312 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:45:46.395324 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 00:45:46.395336 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 8 00:45:46.395348 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:45:46.395360 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:45:46.395372 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 8 00:45:46.395384 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 8 00:45:46.395396 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 8 00:45:46.395415 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 8 00:45:46.395683 kernel: rtc_cmos 00:03: RTC can wake from S4
Nov 8 00:45:46.395891 kernel: rtc_cmos 00:03: registered as rtc0
Nov 8 00:45:46.396090 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:45:45 UTC (1762562745)
Nov 8 00:45:46.396287 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 8 00:45:46.396303 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 8 00:45:46.396315 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:45:46.396327 kernel: Segment Routing with IPv6
Nov 8 00:45:46.396346 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:45:46.396359 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:45:46.396371 kernel: Key type dns_resolver registered
Nov 8 00:45:46.396383 kernel: IPI shorthand broadcast: enabled
Nov 8 00:45:46.396395 kernel: sched_clock: Marking stable (3618007703, 326161483)->(4120768569, -176599383)
Nov 8 00:45:46.396407 kernel: registered taskstats version 1
Nov 8 00:45:46.396420 kernel: Loading compiled-in X.509 certificates
Nov 8 00:45:46.396432 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:45:46.396444 kernel: Key type .fscrypt registered
Nov 8 00:45:46.396460 kernel: Key type fscrypt-provisioning registered
Nov 8 00:45:46.396490 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:45:46.396502 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:45:46.396514 kernel: ima: No architecture policies found
Nov 8 00:45:46.396527 kernel: clk: Disabling unused clocks
Nov 8 00:45:46.396539 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:45:46.396551 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:45:46.396563 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:45:46.396575 kernel: Run /init as init process
Nov 8 00:45:46.396592 kernel: with arguments:
Nov 8 00:45:46.396604 kernel: /init
Nov 8 00:45:46.396616 kernel: with environment:
Nov 8 00:45:46.396628 kernel: HOME=/
Nov 8 00:45:46.396640 kernel: TERM=linux
Nov 8 00:45:46.396655 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:45:46.396671 systemd[1]: Detected virtualization kvm.
Nov 8 00:45:46.396684 systemd[1]: Detected architecture x86-64.
Nov 8 00:45:46.396700 systemd[1]: Running in initrd.
Nov 8 00:45:46.396713 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:45:46.396725 systemd[1]: Hostname set to .
Nov 8 00:45:46.396738 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:45:46.396751 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:45:46.396764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:45:46.396798 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:45:46.396816 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:45:46.396829 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:45:46.396843 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:45:46.396857 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:45:46.396872 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:45:46.396890 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:45:46.396903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:45:46.396916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:45:46.396930 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:45:46.396947 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:45:46.396960 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:45:46.396974 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:45:46.396987 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:45:46.397000 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:45:46.397018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:45:46.397031 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:45:46.397044 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:45:46.397058 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:45:46.397071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:45:46.397084 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:45:46.397098 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:45:46.397111 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:45:46.397128 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:45:46.397142 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:45:46.397155 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:45:46.397169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:45:46.397214 systemd-journald[179]: Collecting audit messages is disabled.
Nov 8 00:45:46.397248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:45:46.397262 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:45:46.397279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:45:46.397293 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:45:46.397312 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:45:46.397326 systemd-journald[179]: Journal started
Nov 8 00:45:46.397352 systemd-journald[179]: Runtime Journal (/run/log/journal/37e5a92bcf4b4c529c9c434f6c64d811) is 8.0M, max 78.3M, 70.3M free.
Nov 8 00:45:46.404597 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:45:46.410559 systemd-modules-load[180]: Inserted module 'overlay'
Nov 8 00:45:46.515416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:45:46.515441 kernel: Bridge firewalling registered
Nov 8 00:45:46.412669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:45:46.457132 systemd-modules-load[180]: Inserted module 'br_netfilter'
Nov 8 00:45:46.513237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:45:46.523666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:45:46.526751 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:45:46.535674 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:45:46.537987 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:45:46.553620 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:45:46.556151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:45:46.570198 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:45:46.572785 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:45:46.581653 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:45:46.586641 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:45:46.589676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:45:46.594342 dracut-cmdline[212]: dracut-dracut-053
Nov 8 00:45:46.599258 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:45:46.633961 systemd-resolved[213]: Positive Trust Anchors:
Nov 8 00:45:46.633978 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:45:46.634006 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:45:46.638553 systemd-resolved[213]: Defaulting to hostname 'linux'.
Nov 8 00:45:46.642768 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:45:46.646132 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:45:46.690527 kernel: SCSI subsystem initialized
Nov 8 00:45:46.700499 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:45:46.711509 kernel: iscsi: registered transport (tcp)
Nov 8 00:45:46.733142 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:45:46.733183 kernel: QLogic iSCSI HBA Driver
Nov 8 00:45:46.782799 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:45:46.789616 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:45:46.816502 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:45:46.816540 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:45:46.819119 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:45:46.863500 kernel: raid6: avx2x4 gen() 32739 MB/s
Nov 8 00:45:46.881499 kernel: raid6: avx2x2 gen() 27552 MB/s
Nov 8 00:45:46.901888 kernel: raid6: avx2x1 gen() 24647 MB/s
Nov 8 00:45:46.901917 kernel: raid6: using algorithm avx2x4 gen() 32739 MB/s
Nov 8 00:45:46.922802 kernel: raid6: .... xor() 4497 MB/s, rmw enabled
Nov 8 00:45:46.922832 kernel: raid6: using avx2x2 recovery algorithm
Nov 8 00:45:46.943523 kernel: xor: automatically using best checksumming function avx
Nov 8 00:45:47.157698 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:45:47.181313 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:45:47.189770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:45:47.221829 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Nov 8 00:45:47.228270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:45:47.239710 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:45:47.262605 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Nov 8 00:45:47.306279 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:45:47.312640 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:45:47.430669 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:45:47.436691 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:45:47.606397 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:45:47.609019 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:45:47.610909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:45:47.612695 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:45:47.620663 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:45:47.639793 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:45:47.649538 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:45:47.654890 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:45:47.655102 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:45:47.662721 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:45:47.659305 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:45:47.840584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:45:47.840811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:45:47.854601 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 8 00:45:47.841637 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:45:47.855922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:45:47.922421 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:45:47.922534 kernel: libata version 3.00 loaded.
Nov 8 00:45:47.926579 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:45:47.935540 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:45:47.935838 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:45:47.949492 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:45:47.949710 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:45:47.995075 kernel: scsi host1: ahci
Nov 8 00:45:47.995413 kernel: scsi host2: ahci
Nov 8 00:45:47.995690 kernel: scsi host3: ahci
Nov 8 00:45:47.995940 kernel: scsi host4: ahci
Nov 8 00:45:47.996163 kernel: scsi host5: ahci
Nov 8 00:45:47.996383 kernel: scsi host6: ahci
Nov 8 00:45:47.996639 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Nov 8 00:45:47.997654 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Nov 8 00:45:48.000512 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Nov 8 00:45:48.000538 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Nov 8 00:45:48.000563 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Nov 8 00:45:48.000574 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Nov 8 00:45:48.102331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:45:48.108669 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:45:48.134342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:45:48.308514 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.318488 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.318525 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.321434 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.324512 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.324596 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.346238 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 8 00:45:48.346577 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 8 00:45:48.356946 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:45:48.362047 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 8 00:45:48.362455 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:45:48.369374 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:45:48.369438 kernel: GPT:9289727 != 167739391
Nov 8 00:45:48.369451 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:45:48.373418 kernel: GPT:9289727 != 167739391
Nov 8 00:45:48.373445 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:45:48.377646 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:45:48.378646 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:45:48.442525 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (461)
Nov 8 00:45:48.446235 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 8 00:45:48.461202 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (459)
Nov 8 00:45:48.456621 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Nov 8 00:45:48.476487 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 8 00:45:48.477679 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 8 00:45:48.486168 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 8 00:45:48.493656 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:45:48.502148 disk-uuid[569]: Primary Header is updated.
Nov 8 00:45:48.502148 disk-uuid[569]: Secondary Entries is updated.
Nov 8 00:45:48.502148 disk-uuid[569]: Secondary Header is updated.
Nov 8 00:45:48.509510 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:45:48.518505 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:45:49.520863 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:45:49.520923 disk-uuid[570]: The operation has completed successfully.
Nov 8 00:45:49.583619 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:45:49.583800 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:45:49.598649 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:45:49.605743 sh[584]: Success
Nov 8 00:45:49.636612 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:45:49.688501 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:45:49.704631 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:45:49.706743 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:45:49.725845 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:45:49.725888 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:45:49.728798 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:45:49.734281 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:45:49.734304 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:45:49.744491 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:45:49.746431 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:45:49.748091 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:45:49.753651 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:45:49.756647 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:45:49.779016 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:45:49.779252 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:45:49.779265 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:45:49.787827 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:45:49.787858 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:45:49.803849 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:45:49.803590 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:45:49.811199 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:45:49.819727 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:45:49.952586 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:45:50.223744 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:45:50.323340 systemd-networkd[765]: lo: Link UP
Nov 8 00:45:50.324717 systemd-networkd[765]: lo: Gained carrier
Nov 8 00:45:50.327310 systemd-networkd[765]: Enumeration completed
Nov 8 00:45:50.328630 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:45:50.330440 systemd[1]: Reached target network.target - Network.
Nov 8 00:45:50.332293 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:45:50.332300 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:45:50.354086 systemd-networkd[765]: eth0: Link UP
Nov 8 00:45:50.354883 systemd-networkd[765]: eth0: Gained carrier
Nov 8 00:45:50.355785 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:45:50.361196 ignition[700]: Ignition 2.19.0
Nov 8 00:45:50.361212 ignition[700]: Stage: fetch-offline
Nov 8 00:45:50.361271 ignition[700]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:50.363824 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:45:50.361284 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:50.361460 ignition[700]: parsed url from cmdline: ""
Nov 8 00:45:50.361466 ignition[700]: no config URL provided
Nov 8 00:45:50.361492 ignition[700]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:45:50.361505 ignition[700]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:45:50.361511 ignition[700]: failed to fetch config: resource requires networking
Nov 8 00:45:50.361809 ignition[700]: Ignition finished successfully
Nov 8 00:45:50.372658 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:45:50.434372 ignition[773]: Ignition 2.19.0
Nov 8 00:45:50.434387 ignition[773]: Stage: fetch
Nov 8 00:45:50.434650 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:50.434670 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:50.434838 ignition[773]: parsed url from cmdline: ""
Nov 8 00:45:50.434846 ignition[773]: no config URL provided
Nov 8 00:45:50.434857 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:45:50.434874 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:45:50.434921 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Nov 8 00:45:50.435324 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:45:50.635542 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Nov 8 00:45:50.635718 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:45:51.036260 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Nov 8 00:45:51.036460 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:45:51.350579 systemd-networkd[765]: eth0: DHCPv4 address 172.239.57.65/24, gateway 172.239.57.1 acquired from 23.192.120.224
Nov 8 00:45:51.837401 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Nov 8 00:45:51.932772 ignition[773]: PUT result: OK
Nov 8 00:45:51.932830 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Nov 8 00:45:52.048379 ignition[773]: GET result: OK
Nov 8 00:45:52.048533 ignition[773]: parsing config with SHA512: 29d071ba55dabb99893130a9dd74e9c37500469acea8cc5bc6cc0913fa0921b6292a4f765f3ac573d3ebfb9906d934bdb7bce85231f30e762824af70bcc547eb
Nov 8 00:45:52.052555 unknown[773]: fetched base config from "system"
Nov 8 00:45:52.052568 unknown[773]: fetched base config from "system"
Nov 8 00:45:52.053501 ignition[773]: fetch: fetch complete
Nov 8 00:45:52.052574 unknown[773]: fetched user config from "akamai"
Nov 8 00:45:52.053508 ignition[773]: fetch: fetch passed
Nov 8 00:45:52.053559 ignition[773]: Ignition finished successfully
Nov 8 00:45:52.058232 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:45:52.065619 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:45:52.082399 ignition[780]: Ignition 2.19.0
Nov 8 00:45:52.082411 ignition[780]: Stage: kargs
Nov 8 00:45:52.082642 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:52.082655 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:52.085785 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:45:52.083538 ignition[780]: kargs: kargs passed
Nov 8 00:45:52.083589 ignition[780]: Ignition finished successfully
Nov 8 00:45:52.092645 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:45:52.129382 systemd-networkd[765]: eth0: Gained IPv6LL
Nov 8 00:45:52.153514 ignition[786]: Ignition 2.19.0
Nov 8 00:45:52.153530 ignition[786]: Stage: disks
Nov 8 00:45:52.153745 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:52.156640 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:45:52.153760 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:52.154710 ignition[786]: disks: disks passed
Nov 8 00:45:52.166092 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:45:52.154765 ignition[786]: Ignition finished successfully
Nov 8 00:45:52.167795 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:45:52.169367 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:45:52.171166 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:45:52.172586 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:45:52.179614 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:45:52.205818 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:45:52.209967 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:45:52.217589 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:45:52.315872 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:45:52.316848 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:45:52.318440 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:45:52.325586 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:45:52.335566 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:45:52.337845 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:45:52.339398 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:45:52.339427 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:45:52.352511 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (802)
Nov 8 00:45:52.358090 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:45:52.358116 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:45:52.358832 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:45:52.364033 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:45:52.372412 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:45:52.372438 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:45:52.372578 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:45:52.376605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:45:52.420505 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:45:52.426681 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:45:52.431779 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:45:52.437884 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:45:52.555618 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:45:52.565581 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:45:52.569757 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:45:52.575067 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:45:52.579985 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:45:52.619697 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:45:52.797965 ignition[916]: INFO : Ignition 2.19.0 Nov 8 00:45:52.797965 ignition[916]: INFO : Stage: mount Nov 8 00:45:52.800992 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:45:52.800992 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:45:52.800992 ignition[916]: INFO : mount: mount passed Nov 8 00:45:52.800992 ignition[916]: INFO : Ignition finished successfully Nov 8 00:45:52.804285 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:45:52.810584 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:45:53.324618 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:45:53.341152 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (927) Nov 8 00:45:53.341218 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:45:53.346616 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:45:53.346639 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:45:53.354661 kernel: BTRFS info (device sda6): enabling ssd optimizations Nov 8 00:45:53.354691 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:45:53.359451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:45:53.415384 ignition[944]: INFO : Ignition 2.19.0 Nov 8 00:45:53.415384 ignition[944]: INFO : Stage: files Nov 8 00:45:53.417243 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:45:53.417243 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:45:53.419323 ignition[944]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:45:53.420435 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:45:53.420435 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:45:53.423575 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:45:53.424817 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:45:53.426334 unknown[944]: wrote ssh authorized keys file for user: core Nov 8 00:45:53.427616 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:45:53.428735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:45:53.430143 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:45:53.857134 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:45:54.046809 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:45:54.046809 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] 
writing file "/sysroot/home/core/install.sh" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 8 00:45:54.553020 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:45:56.259649 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:45:56.261565 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:45:56.261565 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" 
Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:45:56.270693 ignition[944]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:45:56.270693 ignition[944]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:45:56.270693 ignition[944]: INFO : files: files passed Nov 8 00:45:56.270693 ignition[944]: INFO : Ignition finished successfully Nov 8 00:45:56.267668 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:45:56.284115 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:45:56.290642 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:45:56.303148 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:45:56.303349 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:45:56.316998 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:45:56.316998 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:45:56.321569 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:45:56.320000 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:45:56.321784 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:45:56.328632 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:45:56.375631 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:45:56.375805 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:45:56.378273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 8 00:45:56.380123 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:45:56.382107 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:45:56.389703 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:45:56.410876 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:45:56.419659 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:45:56.434347 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:45:56.435249 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:45:56.437115 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:45:56.439255 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:45:56.439362 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:45:56.441584 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:45:56.442794 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:45:56.444680 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Nov 8 00:45:56.446497 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:45:56.448077 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:45:56.449897 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:45:56.451858 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:45:56.453876 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:45:56.455996 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:45:56.457857 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:45:56.459926 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:45:56.460071 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:45:56.462214 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:45:56.463376 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:45:56.465142 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:45:56.466362 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:45:56.467223 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:45:56.467354 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:45:56.470040 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:45:56.470406 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:45:56.471499 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:45:56.471671 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:45:56.478664 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:45:56.483852 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:45:56.486647 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:45:56.487792 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:45:56.493707 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:45:56.495724 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:45:56.513208 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:45:56.514425 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:45:56.517383 ignition[996]: INFO : Ignition 2.19.0 Nov 8 00:45:56.517383 ignition[996]: INFO : Stage: umount Nov 8 00:45:56.517383 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:45:56.517383 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Nov 8 00:45:56.527000 ignition[996]: INFO : umount: umount passed Nov 8 00:45:56.527000 ignition[996]: INFO : Ignition finished successfully Nov 8 00:45:56.521399 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:45:56.521960 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:45:56.525405 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:45:56.525509 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:45:56.527901 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:45:56.527982 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Nov 8 00:45:56.530402 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:45:56.530501 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:45:56.533814 systemd[1]: Stopped target network.target - Network. Nov 8 00:45:56.535299 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:45:56.535368 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:45:56.536186 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:45:56.536902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:45:56.540247 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:45:56.541097 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:45:56.541813 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:45:56.546597 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:45:56.546664 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:45:56.553981 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:45:56.554052 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:45:56.555345 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:45:56.555428 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:45:56.557348 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:45:56.557435 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:45:56.558423 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:45:56.559322 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:45:56.566581 systemd-networkd[765]: eth0: DHCPv6 lease lost Nov 8 00:45:56.569094 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:45:56.573870 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:45:56.574097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:45:56.581071 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:45:56.581314 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:45:56.590837 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:45:56.591004 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:45:56.602813 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:45:56.604687 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:45:56.604783 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:45:56.606136 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:45:56.606231 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:45:56.610250 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:45:56.610336 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:45:56.612948 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:45:56.613038 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:45:56.617065 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:45:56.618659 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Nov 8 00:45:56.618840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:45:56.621193 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:45:56.621378 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:45:56.631300 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:45:56.631609 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:45:56.638198 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:45:56.638359 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:45:56.641814 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:45:56.641877 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:45:56.643729 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:45:56.643825 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:45:56.646330 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:45:56.646402 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:45:56.648002 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:45:56.648100 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:45:56.658669 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:45:56.659881 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:45:56.659973 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:45:56.660989 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:45:56.661063 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:45:56.662091 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:45:56.662171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:45:56.666552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:45:56.666633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:45:56.670509 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:45:56.670658 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:45:56.674425 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:45:56.674647 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:45:56.678618 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:45:56.686684 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:45:56.698229 systemd[1]: Switching root. 
Nov 8 00:45:56.733334 systemd-journald[179]: Journal stopped
00:45:46.363347 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Nov 8 00:45:46.363359 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Nov 8 00:45:46.363375 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:45:46.363387 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Nov 8 00:45:46.363399 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Nov 8 00:45:46.363410 kernel: ACPI: PM-Timer IO Port: 0x608 Nov 8 00:45:46.363422 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Nov 8 00:45:46.363433 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:45:46.363445 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Nov 8 00:45:46.363457 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Nov 8 00:45:46.363468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:45:46.363499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Nov 8 00:45:46.363510 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Nov 8 00:45:46.363522 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:45:46.363539 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Nov 8 00:45:46.363551 kernel: TSC deadline timer available Nov 8 00:45:46.363562 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 8 00:45:46.363574 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Nov 8 00:45:46.363585 kernel: kvm-guest: KVM setup pv remote TLB flush Nov 8 00:45:46.363597 kernel: kvm-guest: setup PV sched yield Nov 8 00:45:46.363613 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Nov 8 00:45:46.363624 kernel: Booting paravirtualized kernel on KVM Nov 8 00:45:46.363636 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:45:46.363648 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 8 00:45:46.363659 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 8 00:45:46.363671 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 8 00:45:46.363682 kernel: pcpu-alloc: [0] 0 1 Nov 8 00:45:46.363694 kernel: kvm-guest: PV spinlocks enabled Nov 8 00:45:46.363705 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 8 00:45:46.363723 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:45:46.363734 kernel: random: crng init done Nov 8 00:45:46.363775 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:45:46.363787 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:45:46.363799 kernel: Fallback order for Node 0: 0 Nov 8 00:45:46.363810 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1031901 Nov 8 00:45:46.363822 kernel: Policy zone: Normal Nov 8 00:45:46.363833 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:45:46.363869 kernel: software IO TLB: area num 2. 
Nov 8 00:45:46.363881 kernel: Memory: 3966204K/4193772K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 227308K reserved, 0K cma-reserved) Nov 8 00:45:46.363893 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:45:46.363904 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:45:46.363916 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:45:46.363928 kernel: Dynamic Preempt: voluntary Nov 8 00:45:46.363939 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:45:46.363952 kernel: rcu: RCU event tracing is enabled. Nov 8 00:45:46.363964 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:45:46.363980 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:45:46.363999 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:45:46.364011 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:45:46.364023 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:45:46.364034 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:45:46.364046 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Nov 8 00:45:46.364057 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:45:46.364069 kernel: Console: colour VGA+ 80x25 Nov 8 00:45:46.364081 kernel: printk: console [tty0] enabled Nov 8 00:45:46.364096 kernel: printk: console [ttyS0] enabled Nov 8 00:45:46.364108 kernel: ACPI: Core revision 20230628 Nov 8 00:45:46.364119 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Nov 8 00:45:46.364131 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:45:46.364143 kernel: x2apic enabled Nov 8 00:45:46.364169 kernel: APIC: Switched APIC routing to: physical x2apic Nov 8 00:45:46.364185 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Nov 8 00:45:46.364197 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Nov 8 00:45:46.364209 kernel: kvm-guest: setup PV IPIs Nov 8 00:45:46.364221 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Nov 8 00:45:46.364233 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Nov 8 00:45:46.364245 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000002) Nov 8 00:45:46.364262 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Nov 8 00:45:46.364274 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Nov 8 00:45:46.364286 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Nov 8 00:45:46.364298 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:45:46.364317 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:45:46.364334 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:45:46.364346 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Nov 8 00:45:46.364358 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Nov 8 00:45:46.364370 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Nov 8 00:45:46.364383 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Nov 8 00:45:46.364396 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. 
Nov 8 00:45:46.364408 kernel: active return thunk: srso_alias_return_thunk Nov 8 00:45:46.364420 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Nov 8 00:45:46.364437 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Nov 8 00:45:46.364449 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:45:46.364461 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:45:46.366500 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:45:46.366521 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:45:46.366534 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Nov 8 00:45:46.366547 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:45:46.366559 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Nov 8 00:45:46.366571 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Nov 8 00:45:46.366590 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:45:46.366602 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:45:46.366615 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:45:46.366627 kernel: landlock: Up and running. Nov 8 00:45:46.366639 kernel: SELinux: Initializing. Nov 8 00:45:46.366659 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:45:46.366671 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:45:46.366684 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Nov 8 00:45:46.366696 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:45:46.366713 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:45:46.366725 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:45:46.366737 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Nov 8 00:45:46.366749 kernel: ... version: 0 Nov 8 00:45:46.366761 kernel: ... bit width: 48 Nov 8 00:45:46.366773 kernel: ... generic registers: 6 Nov 8 00:45:46.366785 kernel: ... value mask: 0000ffffffffffff Nov 8 00:45:46.366797 kernel: ... max period: 00007fffffffffff Nov 8 00:45:46.366809 kernel: ... fixed-purpose events: 0 Nov 8 00:45:46.366825 kernel: ... event mask: 000000000000003f Nov 8 00:45:46.366838 kernel: signal: max sigframe size: 3376 Nov 8 00:45:46.366850 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:45:46.366863 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:45:46.366875 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:45:46.366887 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:45:46.366899 kernel: .... 
node #0, CPUs: #1 Nov 8 00:45:46.366910 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:45:46.366923 kernel: smpboot: Max logical packages: 1 Nov 8 00:45:46.366939 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Nov 8 00:45:46.366951 kernel: devtmpfs: initialized Nov 8 00:45:46.366963 kernel: x86/mm: Memory block size: 128MB Nov 8 00:45:46.366976 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:45:46.366988 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:45:46.367000 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:45:46.367012 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:45:46.367024 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:45:46.367036 kernel: audit: type=2000 audit(1762562743.721:1): state=initialized audit_enabled=0 res=1 Nov 8 00:45:46.367200 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:45:46.367212 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:45:46.367224 kernel: cpuidle: using governor menu Nov 8 00:45:46.367236 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:45:46.367254 kernel: dca service started, version 1.12.1 Nov 8 00:45:46.367266 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Nov 8 00:45:46.367279 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Nov 8 00:45:46.367290 kernel: PCI: Using configuration type 1 for base access Nov 8 00:45:46.367303 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 8 00:45:46.367320 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:45:46.367332 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:45:46.367344 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:45:46.367356 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:45:46.367368 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:45:46.367381 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:45:46.367398 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:45:46.367411 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:45:46.367423 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:45:46.367440 kernel: ACPI: Interpreter enabled Nov 8 00:45:46.367452 kernel: ACPI: PM: (supports S0 S3 S5) Nov 8 00:45:46.367464 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:45:46.367501 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:45:46.367513 kernel: PCI: Using E820 reservations for host bridge windows Nov 8 00:45:46.367525 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Nov 8 00:45:46.367537 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:45:46.367946 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:45:46.368190 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Nov 8 00:45:46.368402 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Nov 8 00:45:46.368419 kernel: PCI host bridge to bus 0000:00 Nov 8 00:45:46.370696 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Nov 8 00:45:46.370891 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Nov 8 00:45:46.371263 
kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Nov 8 00:45:46.371453 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Nov 8 00:45:46.371680 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Nov 8 00:45:46.371865 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Nov 8 00:45:46.372052 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:45:46.372337 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Nov 8 00:45:46.387974 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Nov 8 00:45:46.388194 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Nov 8 00:45:46.388410 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Nov 8 00:45:46.388634 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Nov 8 00:45:46.388836 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Nov 8 00:45:46.389072 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 Nov 8 00:45:46.389277 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc000-0xc03f] Nov 8 00:45:46.389499 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Nov 8 00:45:46.389706 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Nov 8 00:45:46.389928 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Nov 8 00:45:46.390131 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Nov 8 00:45:46.390330 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Nov 8 00:45:46.390635 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Nov 8 00:45:46.390868 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Nov 8 00:45:46.391115 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Nov 8 00:45:46.391349 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Nov 8 00:45:46.391648 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Nov 8 00:45:46.391863 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0c0-0xc0df] Nov 8 00:45:46.392068 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd3000-0xfebd3fff] Nov 8 00:45:46.392298 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Nov 8 00:45:46.392526 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Nov 8 00:45:46.392543 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Nov 8 00:45:46.392555 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Nov 8 00:45:46.392573 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Nov 8 00:45:46.392586 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Nov 8 00:45:46.392606 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Nov 8 00:45:46.392618 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Nov 8 00:45:46.392630 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Nov 8 00:45:46.392642 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Nov 8 00:45:46.392653 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Nov 8 00:45:46.392666 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Nov 8 00:45:46.392677 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Nov 8 00:45:46.392694 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Nov 8 00:45:46.392706 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Nov 8 00:45:46.392718 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Nov 8 00:45:46.392730 
kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Nov 8 00:45:46.392742 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Nov 8 00:45:46.392754 kernel: iommu: Default domain type: Translated Nov 8 00:45:46.392766 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:45:46.392778 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:45:46.392790 kernel: PCI: pci_cache_line_size set to 64 bytes Nov 8 00:45:46.392806 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Nov 8 00:45:46.392818 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Nov 8 00:45:46.393024 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Nov 8 00:45:46.393231 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Nov 8 00:45:46.393440 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Nov 8 00:45:46.393458 kernel: vgaarb: loaded Nov 8 00:45:46.393487 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Nov 8 00:45:46.393500 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Nov 8 00:45:46.393518 kernel: clocksource: Switched to clocksource kvm-clock Nov 8 00:45:46.393530 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:45:46.393542 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:45:46.393554 kernel: pnp: PnP ACPI init Nov 8 00:45:46.393807 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Nov 8 00:45:46.393826 kernel: pnp: PnP ACPI: found 5 devices Nov 8 00:45:46.393839 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:45:46.393850 kernel: NET: Registered PF_INET protocol family Nov 8 00:45:46.393868 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:45:46.393880 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 8 00:45:46.393892 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:45:46.393904 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:45:46.393916 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 8 00:45:46.393927 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 8 00:45:46.393939 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:45:46.393951 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:45:46.393963 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:45:46.393980 kernel: NET: Registered PF_XDP protocol family Nov 8 00:45:46.394170 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Nov 8 00:45:46.394358 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Nov 8 00:45:46.394566 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Nov 8 00:45:46.394757 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Nov 8 00:45:46.394950 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Nov 8 00:45:46.395160 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Nov 8 00:45:46.395180 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:45:46.395199 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 8 00:45:46.395211 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Nov 8 00:45:46.395222 kernel: Initialise system trusted keyrings Nov 8 00:45:46.395232 kernel: workingset: timestamp_bits=39 
max_order=20 bucket_order=0 Nov 8 00:45:46.395243 kernel: Key type asymmetric registered Nov 8 00:45:46.395253 kernel: Asymmetric key parser 'x509' registered Nov 8 00:45:46.395264 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:45:46.395275 kernel: io scheduler mq-deadline registered Nov 8 00:45:46.395285 kernel: io scheduler kyber registered Nov 8 00:45:46.395301 kernel: io scheduler bfq registered Nov 8 00:45:46.395312 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:45:46.395324 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 8 00:45:46.395336 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 8 00:45:46.395348 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:45:46.395360 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:45:46.395372 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 8 00:45:46.395384 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 00:45:46.395396 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 00:45:46.395415 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 8 00:45:46.395683 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 8 00:45:46.395891 kernel: rtc_cmos 00:03: registered as rtc0 Nov 8 00:45:46.396090 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T00:45:45 UTC (1762562745) Nov 8 00:45:46.396287 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Nov 8 00:45:46.396303 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 8 00:45:46.396315 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:45:46.396327 kernel: Segment Routing with IPv6 Nov 8 00:45:46.396346 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:45:46.396359 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:45:46.396371 kernel: Key type dns_resolver registered Nov 8 00:45:46.396383 kernel: IPI shorthand broadcast: enabled Nov 8 00:45:46.396395 kernel: sched_clock: Marking stable (3618007703, 326161483)->(4120768569, -176599383) Nov 8 00:45:46.396407 kernel: registered taskstats version 1 Nov 8 00:45:46.396420 kernel: Loading compiled-in X.509 certificates Nov 8 00:45:46.396432 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:45:46.396444 kernel: Key type .fscrypt registered Nov 8 00:45:46.396460 kernel: Key type fscrypt-provisioning registered Nov 8 00:45:46.396490 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 8 00:45:46.396502 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:45:46.396514 kernel: ima: No architecture policies found Nov 8 00:45:46.396527 kernel: clk: Disabling unused clocks Nov 8 00:45:46.396539 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:45:46.396551 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:45:46.396563 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:45:46.396575 kernel: Run /init as init process Nov 8 00:45:46.396592 kernel: with arguments: Nov 8 00:45:46.396604 kernel: /init Nov 8 00:45:46.396616 kernel: with environment: Nov 8 00:45:46.396628 kernel: HOME=/ Nov 8 00:45:46.396640 kernel: TERM=linux Nov 8 00:45:46.396655 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:45:46.396671 systemd[1]: Detected virtualization kvm. Nov 8 00:45:46.396684 systemd[1]: Detected architecture x86-64. Nov 8 00:45:46.396700 systemd[1]: Running in initrd. Nov 8 00:45:46.396713 systemd[1]: No hostname configured, using default hostname. Nov 8 00:45:46.396725 systemd[1]: Hostname set to . Nov 8 00:45:46.396738 systemd[1]: Initializing machine ID from random generator. Nov 8 00:45:46.396751 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:45:46.396764 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:45:46.396798 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:45:46.396816 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:45:46.396829 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:45:46.396843 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:45:46.396857 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:45:46.396872 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:45:46.396890 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:45:46.396903 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:45:46.396916 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:45:46.396930 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:45:46.396947 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:45:46.396960 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:45:46.396974 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:45:46.396987 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:45:46.397000 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:45:46.397018 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:45:46.397031 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Nov 8 00:45:46.397044 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:45:46.397058 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:45:46.397071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:45:46.397084 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:45:46.397098 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:45:46.397111 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:45:46.397128 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:45:46.397142 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:45:46.397155 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:45:46.397169 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:45:46.397214 systemd-journald[179]: Collecting audit messages is disabled. Nov 8 00:45:46.397248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:45:46.397262 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:45:46.397279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:45:46.397293 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:45:46.397312 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:45:46.397326 systemd-journald[179]: Journal started Nov 8 00:45:46.397352 systemd-journald[179]: Runtime Journal (/run/log/journal/37e5a92bcf4b4c529c9c434f6c64d811) is 8.0M, max 78.3M, 70.3M free. Nov 8 00:45:46.404597 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:45:46.410559 systemd-modules-load[180]: Inserted module 'overlay' Nov 8 00:45:46.515416 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:45:46.515441 kernel: Bridge firewalling registered Nov 8 00:45:46.412669 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:45:46.457132 systemd-modules-load[180]: Inserted module 'br_netfilter' Nov 8 00:45:46.513237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:45:46.523666 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:45:46.526751 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:45:46.535674 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:45:46.537987 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:45:46.553620 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:45:46.556151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:45:46.570198 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:45:46.572785 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:45:46.581653 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:45:46.586641 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Nov 8 00:45:46.589676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:45:46.594342 dracut-cmdline[212]: dracut-dracut-053 Nov 8 00:45:46.599258 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:45:46.633961 systemd-resolved[213]: Positive Trust Anchors: Nov 8 00:45:46.633978 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:45:46.634006 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:45:46.638553 systemd-resolved[213]: Defaulting to hostname 'linux'. Nov 8 00:45:46.642768 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:45:46.646132 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:45:46.690527 kernel: SCSI subsystem initialized Nov 8 00:45:46.700499 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:45:46.711509 kernel: iscsi: registered transport (tcp) Nov 8 00:45:46.733142 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:45:46.733183 kernel: QLogic iSCSI HBA Driver Nov 8 00:45:46.782799 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:45:46.789616 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:45:46.816502 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:45:46.816540 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:45:46.819119 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:45:46.863500 kernel: raid6: avx2x4 gen() 32739 MB/s Nov 8 00:45:46.881499 kernel: raid6: avx2x2 gen() 27552 MB/s Nov 8 00:45:46.901888 kernel: raid6: avx2x1 gen() 24647 MB/s Nov 8 00:45:46.901917 kernel: raid6: using algorithm avx2x4 gen() 32739 MB/s Nov 8 00:45:46.922802 kernel: raid6: .... xor() 4497 MB/s, rmw enabled Nov 8 00:45:46.922832 kernel: raid6: using avx2x2 recovery algorithm Nov 8 00:45:46.943523 kernel: xor: automatically using best checksumming function avx Nov 8 00:45:47.157698 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:45:47.181313 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:45:47.189770 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:45:47.221829 systemd-udevd[397]: Using default interface naming scheme 'v255'. Nov 8 00:45:47.228270 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Nov 8 00:45:47.239710 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:45:47.262605 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Nov 8 00:45:47.306279 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:45:47.312640 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:45:47.430669 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:45:47.436691 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:45:47.606397 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:45:47.609019 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:45:47.610909 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:45:47.612695 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:45:47.620663 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:45:47.639793 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:45:47.649538 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:45:47.654890 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:45:47.655102 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:45:47.662721 kernel: scsi host0: Virtio SCSI HBA
Nov 8 00:45:47.659305 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:45:47.840584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:45:47.840811 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:45:47.854601 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Nov 8 00:45:47.841637 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:45:47.855922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:45:47.922421 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:45:47.922534 kernel: libata version 3.00 loaded.
Nov 8 00:45:47.926579 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:45:47.935540 kernel: ahci 0000:00:1f.2: version 3.0
Nov 8 00:45:47.935838 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 8 00:45:47.949492 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Nov 8 00:45:47.949710 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 8 00:45:47.995075 kernel: scsi host1: ahci
Nov 8 00:45:47.995413 kernel: scsi host2: ahci
Nov 8 00:45:47.995690 kernel: scsi host3: ahci
Nov 8 00:45:47.995940 kernel: scsi host4: ahci
Nov 8 00:45:47.996163 kernel: scsi host5: ahci
Nov 8 00:45:47.996383 kernel: scsi host6: ahci
Nov 8 00:45:47.996639 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46
Nov 8 00:45:47.997654 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46
Nov 8 00:45:48.000512 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46
Nov 8 00:45:48.000538 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46
Nov 8 00:45:48.000563 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46
Nov 8 00:45:48.000574 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46
Nov 8 00:45:48.102331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:45:48.108669 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:45:48.134342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:45:48.308514 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.318488 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.318525 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.321434 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.324512 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.324596 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 8 00:45:48.346238 kernel: sd 0:0:0:0: Power-on or device reset occurred
Nov 8 00:45:48.346577 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Nov 8 00:45:48.356946 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:45:48.362047 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Nov 8 00:45:48.362455 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 8 00:45:48.369374 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 8 00:45:48.369438 kernel: GPT:9289727 != 167739391
Nov 8 00:45:48.369451 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 8 00:45:48.373418 kernel: GPT:9289727 != 167739391
Nov 8 00:45:48.373445 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 8 00:45:48.377646 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:45:48.378646 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:45:48.442525 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (461)
Nov 8 00:45:48.446235 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Nov 8 00:45:48.461202 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (459)
Nov 8 00:45:48.456621 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
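The GPT complaints above (9289727 != 167739391) are the usual signature of an image built for a small disk and then deployed onto a larger one: the backup GPT header still sits at the old end-of-disk, which disk-uuid.service repairs a few lines below. A hedged manual-repair sketch, assuming the disk was only grown and the partition table is otherwise intact:

  # Relocate the backup GPT header and entries to the real end of /dev/sda.
  sgdisk -e /dev/sda
  # Or inspect with GNU Parted, as the kernel message itself suggests.
  parted /dev/sda print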
Nov 8 00:45:48.476487 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Nov 8 00:45:48.477679 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Nov 8 00:45:48.486168 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Nov 8 00:45:48.493656 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:45:48.502148 disk-uuid[569]: Primary Header is updated.
Nov 8 00:45:48.502148 disk-uuid[569]: Secondary Entries is updated.
Nov 8 00:45:48.502148 disk-uuid[569]: Secondary Header is updated.
Nov 8 00:45:48.509510 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:45:48.518505 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:45:49.520863 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:45:49.520923 disk-uuid[570]: The operation has completed successfully.
Nov 8 00:45:49.583619 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:45:49.583800 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:45:49.598649 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:45:49.605743 sh[584]: Success
Nov 8 00:45:49.636612 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Nov 8 00:45:49.688501 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:45:49.704631 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:45:49.706743 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:45:49.725845 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:45:49.725888 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:45:49.728798 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:45:49.734281 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:45:49.734304 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:45:49.744491 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 8 00:45:49.746431 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:45:49.748091 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:45:49.753651 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:45:49.756647 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:45:49.779016 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:45:49.779252 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:45:49.779265 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:45:49.787827 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:45:49.787858 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:45:49.803849 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:45:49.803590 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:45:49.811199 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:45:49.819727 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:45:49.952586 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:45:50.223744 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:45:50.323340 systemd-networkd[765]: lo: Link UP
Nov 8 00:45:50.324717 systemd-networkd[765]: lo: Gained carrier
Nov 8 00:45:50.327310 systemd-networkd[765]: Enumeration completed
Nov 8 00:45:50.328630 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:45:50.330440 systemd[1]: Reached target network.target - Network.
Nov 8 00:45:50.332293 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:45:50.332300 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:45:50.354086 systemd-networkd[765]: eth0: Link UP
Nov 8 00:45:50.354883 systemd-networkd[765]: eth0: Gained carrier
Nov 8 00:45:50.355785 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:45:50.361196 ignition[700]: Ignition 2.19.0
Nov 8 00:45:50.361212 ignition[700]: Stage: fetch-offline
Nov 8 00:45:50.361271 ignition[700]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:50.363824 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:45:50.361284 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:50.361460 ignition[700]: parsed url from cmdline: ""
Nov 8 00:45:50.361466 ignition[700]: no config URL provided
Nov 8 00:45:50.361492 ignition[700]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:45:50.361505 ignition[700]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:45:50.361511 ignition[700]: failed to fetch config: resource requires networking
Nov 8 00:45:50.361809 ignition[700]: Ignition finished successfully
Nov 8 00:45:50.372658 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
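Both "potentially unpredictable interface name" warnings above come from zz-default.network matching eth0 by name alone. A minimal sketch of a more deterministic match, assuming the NIC's MAC address is known (the address and file name below are placeholders, not taken from this log):

  cat <<'EOF' > /etc/systemd/network/10-eth0.network
  [Match]
  MACAddress=00:16:3e:00:00:01

  [Network]
  DHCP=yes
  EOF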
Nov 8 00:45:50.434372 ignition[773]: Ignition 2.19.0
Nov 8 00:45:50.434387 ignition[773]: Stage: fetch
Nov 8 00:45:50.434650 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:50.434670 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:50.434838 ignition[773]: parsed url from cmdline: ""
Nov 8 00:45:50.434846 ignition[773]: no config URL provided
Nov 8 00:45:50.434857 ignition[773]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:45:50.434874 ignition[773]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:45:50.434921 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #1
Nov 8 00:45:50.435324 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:45:50.635542 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #2
Nov 8 00:45:50.635718 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:45:51.036260 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #3
Nov 8 00:45:51.036460 ignition[773]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Nov 8 00:45:51.350579 systemd-networkd[765]: eth0: DHCPv4 address 172.239.57.65/24, gateway 172.239.57.1 acquired from 23.192.120.224
Nov 8 00:45:51.837401 ignition[773]: PUT http://169.254.169.254/v1/token: attempt #4
Nov 8 00:45:51.932772 ignition[773]: PUT result: OK
Nov 8 00:45:51.932830 ignition[773]: GET http://169.254.169.254/v1/user-data: attempt #1
Nov 8 00:45:52.048379 ignition[773]: GET result: OK
Nov 8 00:45:52.048533 ignition[773]: parsing config with SHA512: 29d071ba55dabb99893130a9dd74e9c37500469acea8cc5bc6cc0913fa0921b6292a4f765f3ac573d3ebfb9906d934bdb7bce85231f30e762824af70bcc547eb
Nov 8 00:45:52.052555 unknown[773]: fetched base config from "system"
Nov 8 00:45:52.052568 unknown[773]: fetched base config from "system"
Nov 8 00:45:52.053501 ignition[773]: fetch: fetch complete
Nov 8 00:45:52.052574 unknown[773]: fetched user config from "akamai"
Nov 8 00:45:52.053508 ignition[773]: fetch: fetch passed
Nov 8 00:45:52.053559 ignition[773]: Ignition finished successfully
Nov 8 00:45:52.058232 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:45:52.065619 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:45:52.082399 ignition[780]: Ignition 2.19.0
Nov 8 00:45:52.082411 ignition[780]: Stage: kargs
Nov 8 00:45:52.082642 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:52.082655 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:52.085785 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:45:52.083538 ignition[780]: kargs: kargs passed
Nov 8 00:45:52.083589 ignition[780]: Ignition finished successfully
Nov 8 00:45:52.092645 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:45:52.129382 systemd-networkd[765]: eth0: Gained IPv6LL
Nov 8 00:45:52.153514 ignition[786]: Ignition 2.19.0
Nov 8 00:45:52.153530 ignition[786]: Stage: disks
Nov 8 00:45:52.153745 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:52.156640 systemd[1]: Finished ignition-disks.service - Ignition (disks).
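The fetch stage above fails its token PUT three times until the DHCPv4 lease lands at 00:45:51.350579, then succeeds on attempt #4 and retrieves the user data. A rough replay of the same token-then-user-data flow with curl; the header names are assumptions about the Akamai/Linode metadata service, not taken from this log:

  # Assumed header names; verify against the provider's metadata documentation.
  TOKEN=$(curl -s -X PUT -H 'Metadata-Token-Expiry-Seconds: 3600' \
      http://169.254.169.254/v1/token)
  curl -s -H "Metadata-Token: ${TOKEN}" http://169.254.169.254/v1/user-data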
Nov 8 00:45:52.153760 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:52.154710 ignition[786]: disks: disks passed
Nov 8 00:45:52.166092 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:45:52.154765 ignition[786]: Ignition finished successfully
Nov 8 00:45:52.167795 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:45:52.169367 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:45:52.171166 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:45:52.172586 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:45:52.179614 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:45:52.205818 systemd-fsck[794]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 8 00:45:52.209967 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:45:52.217589 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:45:52.315872 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:45:52.316848 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:45:52.318440 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:45:52.325586 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:45:52.335566 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:45:52.337845 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:45:52.339398 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:45:52.339427 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:45:52.352511 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by mount (802)
Nov 8 00:45:52.358090 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:45:52.358116 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:45:52.358832 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:45:52.364033 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:45:52.372412 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:45:52.372438 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:45:52.372578 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:45:52.376605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:45:52.420505 initrd-setup-root[827]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:45:52.426681 initrd-setup-root[834]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:45:52.431779 initrd-setup-root[841]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:45:52.437884 initrd-setup-root[848]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:45:52.555618 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:45:52.565581 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:45:52.569757 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:45:52.575067 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:45:52.579985 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:45:52.619697 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:45:52.797965 ignition[916]: INFO : Ignition 2.19.0
Nov 8 00:45:52.797965 ignition[916]: INFO : Stage: mount
Nov 8 00:45:52.800992 ignition[916]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:52.800992 ignition[916]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:52.800992 ignition[916]: INFO : mount: mount passed
Nov 8 00:45:52.800992 ignition[916]: INFO : Ignition finished successfully
Nov 8 00:45:52.804285 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:45:52.810584 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:45:53.324618 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:45:53.341152 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (927)
Nov 8 00:45:53.341218 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:45:53.346616 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:45:53.346639 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:45:53.354661 kernel: BTRFS info (device sda6): enabling ssd optimizations
Nov 8 00:45:53.354691 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:45:53.359451 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:45:53.415384 ignition[944]: INFO : Ignition 2.19.0
Nov 8 00:45:53.415384 ignition[944]: INFO : Stage: files
Nov 8 00:45:53.417243 ignition[944]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:53.417243 ignition[944]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:53.419323 ignition[944]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:45:53.420435 ignition[944]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:45:53.420435 ignition[944]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:45:53.423575 ignition[944]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:45:53.424817 ignition[944]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:45:53.426334 unknown[944]: wrote ssh authorized keys file for user: core
Nov 8 00:45:53.427616 ignition[944]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:45:53.428735 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:45:53.430143 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 8 00:45:53.857134 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:45:54.046809 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 8 00:45:54.046809 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:45:54.051273 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 8 00:45:54.553020 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 00:45:56.259649 ignition[944]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 8 00:45:56.261565 ignition[944]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 00:45:56.261565 ignition[944]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:45:56.270693 ignition[944]: INFO : files: files passed
Nov 8 00:45:56.270693 ignition[944]: INFO : Ignition finished successfully
Nov 8 00:45:56.267668 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:45:56.284115 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:45:56.290642 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:45:56.303148 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:45:56.303349 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:45:56.316998 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:45:56.316998 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:45:56.321569 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:45:56.320000 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:45:56.321784 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:45:56.328632 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:45:56.375631 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:45:56.375805 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:45:56.378273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:45:56.380123 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:45:56.382107 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:45:56.389703 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:45:56.410876 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:45:56.419659 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:45:56.434347 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:45:56.435249 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:45:56.437115 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:45:56.439255 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:45:56.439362 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:45:56.441584 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:45:56.442794 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:45:56.444680 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
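All of the files-stage operations above (files, links, written units, enabled presets) were declared in the user config fetched earlier from the metadata service. A minimal sketch of the Ignition JSON shape that yields a written unit plus an enabled preset and the extension symlink; the spec version, unit contents, and overall structure are illustrative, not recovered from this log:

  cat > user.ign <<'EOF'
  {
    "ignition": { "version": "3.4.0" },
    "storage": {
      "links": [
        { "path": "/etc/extensions/kubernetes.raw",
          "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" }
      ]
    },
    "systemd": {
      "units": [
        { "name": "prepare-helm.service", "enabled": true,
          "contents": "[Unit]\nDescription=Unpack helm\n[Install]\nWantedBy=multi-user.target\n" }
      ]
    }
  }
  EOF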
Nov 8 00:45:56.446497 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:45:56.448077 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:45:56.449897 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:45:56.451858 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:45:56.453876 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:45:56.455996 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:45:56.457857 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:45:56.459926 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:45:56.460071 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:45:56.462214 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:45:56.463376 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:45:56.465142 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:45:56.466362 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:45:56.467223 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:45:56.467354 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:45:56.470040 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:45:56.470406 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:45:56.471499 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:45:56.471671 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:45:56.478664 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:45:56.483852 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:45:56.486647 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:45:56.487792 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:45:56.493707 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:45:56.495724 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:45:56.513208 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:45:56.514425 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:45:56.517383 ignition[996]: INFO : Ignition 2.19.0
Nov 8 00:45:56.517383 ignition[996]: INFO : Stage: umount
Nov 8 00:45:56.517383 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:45:56.517383 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Nov 8 00:45:56.527000 ignition[996]: INFO : umount: umount passed
Nov 8 00:45:56.527000 ignition[996]: INFO : Ignition finished successfully
Nov 8 00:45:56.521399 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:45:56.521960 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:45:56.525405 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:45:56.525509 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:45:56.527901 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:45:56.527982 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:45:56.530402 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 00:45:56.530501 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 00:45:56.533814 systemd[1]: Stopped target network.target - Network.
Nov 8 00:45:56.535299 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:45:56.535368 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:45:56.536186 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:45:56.536902 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:45:56.540247 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:45:56.541097 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:45:56.541813 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:45:56.546597 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:45:56.546664 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:45:56.553981 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:45:56.554052 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:45:56.555345 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:45:56.555428 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:45:56.557348 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:45:56.557435 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:45:56.558423 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:45:56.559322 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:45:56.566581 systemd-networkd[765]: eth0: DHCPv6 lease lost
Nov 8 00:45:56.569094 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:45:56.573870 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:45:56.574097 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:45:56.581071 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:45:56.581314 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:45:56.590837 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:45:56.591004 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:45:56.602813 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:45:56.604687 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:45:56.604783 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:45:56.606136 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:45:56.606231 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:45:56.610250 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:45:56.610336 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:45:56.612948 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:45:56.613038 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:45:56.617065 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:45:56.618659 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:45:56.618840 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:45:56.621193 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 00:45:56.621378 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 00:45:56.631300 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 00:45:56.631609 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:45:56.638198 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 00:45:56.638359 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:45:56.641814 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 00:45:56.641877 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:45:56.643729 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 00:45:56.643825 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:45:56.646330 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 00:45:56.646402 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:45:56.648002 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:45:56.648100 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:45:56.658669 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 00:45:56.659881 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 00:45:56.659973 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:45:56.660989 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 8 00:45:56.661063 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:45:56.662091 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:45:56.662171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:45:56.666552 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:45:56.666633 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:45:56.670509 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 00:45:56.670658 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 00:45:56.674425 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 00:45:56.674647 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 00:45:56.678618 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 00:45:56.686684 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 00:45:56.698229 systemd[1]: Switching root.
Nov 8 00:45:56.733334 systemd-journald[179]: Journal stopped
Nov 8 00:45:58.136933 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Nov 8 00:45:58.136970 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 00:45:58.136984 kernel: SELinux: policy capability open_perms=1
Nov 8 00:45:58.137184 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 00:45:58.137200 kernel: SELinux: policy capability always_check_network=0
Nov 8 00:45:58.137210 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 00:45:58.137221 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 00:45:58.137231 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 00:45:58.137241 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 00:45:58.137252 kernel: audit: type=1403 audit(1762562756.884:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 00:45:58.137263 systemd[1]: Successfully loaded SELinux policy in 56.065ms.
Nov 8 00:45:58.137278 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.766ms.
Nov 8 00:45:58.137290 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:45:58.137301 systemd[1]: Detected virtualization kvm.
Nov 8 00:45:58.137313 systemd[1]: Detected architecture x86-64.
Nov 8 00:45:58.137324 systemd[1]: Detected first boot.
Nov 8 00:45:58.137338 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:45:58.137350 zram_generator::config[1039]: No configuration found.
Nov 8 00:45:58.137361 systemd[1]: Populated /etc with preset unit settings.
Nov 8 00:45:58.137373 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 8 00:45:58.137384 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 8 00:45:58.137395 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:45:58.137407 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 00:45:58.137422 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 00:45:58.137433 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 00:45:58.137444 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 00:45:58.137456 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 00:45:58.137467 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 00:45:58.140998 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 00:45:58.141015 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 00:45:58.141033 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:45:58.141045 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:45:58.141057 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 00:45:58.141068 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 00:45:58.141086 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 00:45:58.141110 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:45:58.141127 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 00:45:58.141143 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:45:58.141165 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 8 00:45:58.141182 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 8 00:45:58.141205 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:45:58.141216 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 00:45:58.141228 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:45:58.141240 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:45:58.141256 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:45:58.141274 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:45:58.141291 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 00:45:58.141310 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 00:45:58.141336 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:45:58.141356 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:45:58.141376 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:45:58.141403 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 00:45:58.141418 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 00:45:58.141430 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 00:45:58.141443 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 00:45:58.141455 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:45:58.141467 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 00:45:58.141495 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 00:45:58.141507 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 00:45:58.141524 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 00:45:58.141536 systemd[1]: Reached target machines.target - Containers.
Nov 8 00:45:58.141549 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 00:45:58.141561 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:45:58.141573 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:45:58.141585 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 00:45:58.141597 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:45:58.141609 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:45:58.141625 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:45:58.141637 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 00:45:58.141648 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:45:58.141661 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 00:45:58.141673 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 8 00:45:58.141685 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 8 00:45:58.141697 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 8 00:45:58.141709 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 8 00:45:58.141724 kernel: ACPI: bus type drm_connector registered
Nov 8 00:45:58.141735 kernel: fuse: init (API version 7.39)
Nov 8 00:45:58.141747 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:45:58.141759 kernel: loop: module loaded
Nov 8 00:45:58.141770 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:45:58.141783 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:45:58.141795 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 00:45:58.141807 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:45:58.141819 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 8 00:45:58.141834 systemd[1]: Stopped verity-setup.service.
Nov 8 00:45:58.141847 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:45:58.141859 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 00:45:58.141871 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 00:45:58.141883 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 00:45:58.141895 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 00:45:58.141944 systemd-journald[1129]: Collecting audit messages is disabled.
Nov 8 00:45:58.141907 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 00:45:58.141971 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 00:45:58.141984 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 00:45:58.142178 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:45:58.142190 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 00:45:58.142202 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 00:45:58.142217 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:45:58.142230 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:45:58.142242 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:45:58.142254 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:45:58.142266 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:45:58.142278 systemd-journald[1129]: Journal started
Nov 8 00:45:58.142302 systemd-journald[1129]: Runtime Journal (/run/log/journal/5a161a159bb24338939d661f2935f7a1) is 8.0M, max 78.3M, 70.3M free.
Nov 8 00:45:57.640552 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 00:45:57.662017 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 8 00:45:57.662671 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 8 00:45:58.149370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:45:58.149401 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:45:58.152993 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 00:45:58.153373 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 00:45:58.154769 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:45:58.154950 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:45:58.156410 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:45:58.157779 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:45:58.159237 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 00:45:58.176202 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:45:58.187539 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 00:45:58.197606 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 00:45:58.198571 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 00:45:58.198604 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:45:58.204371 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 00:45:58.207638 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 00:45:58.212583 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 00:45:58.213692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:45:58.238008 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 00:45:58.251658 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 00:45:58.253569 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:45:58.273227 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 00:45:58.275581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:45:58.279630 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:45:58.295716 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 00:45:58.298646 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:45:58.301654 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:45:58.303903 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 00:45:58.306010 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 00:45:58.318095 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 00:45:58.346948 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 00:45:58.456923 systemd-journald[1129]: Time spent on flushing to /var/log/journal/5a161a159bb24338939d661f2935f7a1 is 17.491ms for 979 entries.
Nov 8 00:45:58.456923 systemd-journald[1129]: System Journal (/var/log/journal/5a161a159bb24338939d661f2935f7a1) is 8.0M, max 195.6M, 187.6M free.
Nov 8 00:45:58.514687 systemd-journald[1129]: Received client request to flush runtime journal.
Nov 8 00:45:58.514747 kernel: loop0: detected capacity change from 0 to 8
Nov 8 00:45:58.488642 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 00:45:58.492073 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 00:45:58.525062 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 00:45:58.527154 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 00:45:58.531795 udevadm[1165]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 8 00:45:58.542897 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 00:45:58.559831 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:45:58.754415 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 00:45:58.761312 kernel: loop1: detected capacity change from 0 to 142488
Nov 8 00:45:58.779938 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 00:45:58.844212 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Nov 8 00:45:58.844247 systemd-tmpfiles[1160]: ACLs are not supported, ignoring.
Nov 8 00:45:58.892604 kernel: loop2: detected capacity change from 0 to 219144
Nov 8 00:45:58.896236 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:45:58.913820 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 00:45:59.310377 kernel: loop3: detected capacity change from 0 to 140768
Nov 8 00:45:59.468881 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 00:45:59.490984 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:45:59.503265 kernel: loop4: detected capacity change from 0 to 8
Nov 8 00:46:00.684965 kernel: loop5: detected capacity change from 0 to 142488
Nov 8 00:46:00.871510 kernel: loop6: detected capacity change from 0 to 219144
Nov 8 00:46:00.888129 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Nov 8 00:46:00.888168 systemd-tmpfiles[1185]: ACLs are not supported, ignoring.
Nov 8 00:46:00.920628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:46:00.937536 kernel: loop7: detected capacity change from 0 to 140768
Nov 8 00:46:01.063442 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Nov 8 00:46:01.065561 (sd-merge)[1186]: Merged extensions into '/usr'.
Nov 8 00:46:01.074160 systemd[1]: Reloading requested from client PID 1159 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 00:46:01.074408 systemd[1]: Reloading...
Nov 8 00:46:01.699514 zram_generator::config[1213]: No configuration found.
Nov 8 00:46:01.797180 ldconfig[1154]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
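The (sd-merge) lines above are systemd-sysext overlaying the four extension images onto /usr; the loopN capacity changes just before them are those images being attached. A small sketch of how one could inspect the merged state on the booted host (assuming an interactive shell):

  # Show which hierarchies are extended and by which extension images.
  systemd-sysext status
  # The kubernetes image is the one Ignition linked at /etc/extensions/kubernetes.raw.
  ls -l /etc/extensions/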
Nov 8 00:46:01.880765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:46:01.930195 systemd[1]: Reloading finished in 854 ms.
Nov 8 00:46:01.966927 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:46:01.969002 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 00:46:01.971282 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 00:46:01.982729 systemd[1]: Starting ensure-sysext.service...
Nov 8 00:46:01.985700 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:46:01.993715 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:46:02.013085 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Nov 8 00:46:02.013108 systemd[1]: Reloading...
Nov 8 00:46:02.054275 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 00:46:02.055627 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 00:46:02.059625 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 00:46:02.060240 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Nov 8 00:46:02.061763 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Nov 8 00:46:02.070789 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:46:02.071725 systemd-tmpfiles[1259]: Skipping /boot
Nov 8 00:46:02.078385 systemd-udevd[1260]: Using default interface naming scheme 'v255'.
Nov 8 00:46:02.105378 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 00:46:02.106744 systemd-tmpfiles[1259]: Skipping /boot
Nov 8 00:46:02.148525 zram_generator::config[1293]: No configuration found.
Nov 8 00:46:02.515940 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:46:02.593107 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 8 00:46:02.595015 systemd[1]: Reloading finished in 581 ms.
Nov 8 00:46:02.622057 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:46:02.630857 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:46:02.644522 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Nov 8 00:46:02.666732 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 8 00:46:02.665714 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:46:02.689849 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 8 00:46:02.690186 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 8 00:46:02.690397 kernel: ACPI: button: Power Button [PWRF]
Nov 8 00:46:02.694670 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:46:02.696887 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:46:02.700659 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:46:02.704656 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:46:02.707707 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:46:02.927497 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:46:02.928241 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:46:02.970128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:46:02.973718 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:46:02.986285 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:46:02.988179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:46:02.988285 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:46:03.001848 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:46:03.003838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:46:03.004080 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:46:03.011090 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:46:03.011319 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:46:03.040120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:46:03.041070 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:46:03.041215 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:46:03.044833 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:46:03.050245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:46:03.050459 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:46:03.056186 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:46:03.057288 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:46:03.057396 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:46:03.077854 systemd[1]: Finished ensure-sysext.service. Nov 8 00:46:03.090859 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 8 00:46:03.098500 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 8 00:46:03.100994 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:46:03.101309 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
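The modprobe@dm_mod, modprobe@efi_pstore, modprobe@loop, and modprobe@drm starts above all come from one systemd template unit that turns its instance name into a module load. A sketch of a unit of that shape (abridged, not the verbatim unit systemd ships):

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # the leading "-" tolerates a failed load; %i is the instance name
    ExecStart=-/sbin/modprobe -abq %i

Instantiated as, e.g., modprobe@loop.service, which is why each module load is logged as its own short-lived service.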
Nov 8 00:46:03.133779 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:46:03.133987 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:46:03.139512 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:46:03.139730 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:46:03.146037 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:46:03.146610 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:46:03.182448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:46:03.184210 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:46:03.185552 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:46:03.212644 augenrules[1397]: No rules Nov 8 00:46:03.214445 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:46:03.216604 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:46:03.239771 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:46:03.249510 kernel: EDAC MC: Ver: 3.0.0 Nov 8 00:46:03.262772 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:46:03.265728 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:46:03.320867 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:46:03.320946 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1300) Nov 8 00:46:03.297744 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:46:03.386840 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:46:03.495509 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Nov 8 00:46:03.502744 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:46:03.531434 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:46:03.558708 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:46:03.594504 lvm[1420]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:46:03.601008 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:46:03.628576 systemd-networkd[1363]: lo: Link UP Nov 8 00:46:03.628585 systemd-networkd[1363]: lo: Gained carrier Nov 8 00:46:03.629999 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:46:03.631999 systemd-resolved[1364]: Positive Trust Anchors: Nov 8 00:46:03.633361 systemd-networkd[1363]: Enumeration completed Nov 8 00:46:03.635099 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:46:03.635191 systemd-resolved[1364]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:46:03.635668 systemd-resolved[1364]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:46:03.637527 systemd-networkd[1363]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:46:03.639399 systemd-networkd[1363]: eth0: Link UP Nov 8 00:46:03.641745 systemd-networkd[1363]: eth0: Gained carrier Nov 8 00:46:03.641835 systemd-networkd[1363]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:46:03.649028 systemd-resolved[1364]: Defaulting to hostname 'linux'. Nov 8 00:46:03.719746 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:46:03.720788 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:46:03.721992 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:46:03.723138 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:46:03.725142 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:46:03.726103 systemd[1]: Reached target network.target - Network. Nov 8 00:46:03.726853 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:46:03.727730 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:46:03.728648 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:46:03.729525 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:46:03.730365 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:46:03.731399 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:46:03.731435 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:46:03.732165 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:46:03.733397 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:46:03.734307 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:46:03.735154 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:46:03.737576 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:46:03.746429 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:46:03.756633 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:46:03.759716 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:46:03.763703 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:46:03.765818 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:46:03.768167 systemd[1]: Reached target sockets.target - Socket Units. 
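The earlier "found matching network ... potentially unpredictable interface name" lines show eth0 being claimed by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network. A sketch of what such a lowest-priority catch-all looks like (illustrative, not the verbatim shipped file):

    [Match]
    Name=*

    [Network]
    DHCP=yes

    # confirm which .network file claimed an interface:
    networkctl status eth0

The zz- prefix sorts the file last, so any more specific .network file earlier in lexical order wins.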
Nov 8 00:46:03.769761 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:46:03.772614 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:46:03.772675 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:46:03.774140 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:46:03.779718 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:46:03.783720 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:46:03.787762 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:46:03.790735 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:46:03.791545 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:46:03.793687 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:46:03.799351 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:46:03.825982 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:46:03.808665 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:46:03.826452 jq[1435]: false Nov 8 00:46:03.836608 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:46:03.851670 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:46:03.853364 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:46:03.855062 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:46:03.892859 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:46:03.900965 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:46:03.906681 dbus-daemon[1434]: [system] SELinux support is enabled Nov 8 00:46:03.904419 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:46:03.904678 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:46:03.909045 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:46:03.926782 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:46:03.927956 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:46:03.928179 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
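update-engine (starting above) and locksmithd, the "Cluster reboot manager" started just below, are configured on Flatcar through /etc/flatcar/update.conf; locksmithd later logs strategy="reboot", which corresponds to the REBOOT_STRATEGY key. A sketch with illustrative values:

    # /etc/flatcar/update.conf
    GROUP=stable
    REBOOT_STRATEGY=reboot    # alternatives include etcd-lock and off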
Nov 8 00:46:03.935558 extend-filesystems[1436]: Found loop4 Nov 8 00:46:03.949730 extend-filesystems[1436]: Found loop5 Nov 8 00:46:03.949730 extend-filesystems[1436]: Found loop6 Nov 8 00:46:03.949730 extend-filesystems[1436]: Found loop7 Nov 8 00:46:03.949730 extend-filesystems[1436]: Found sda Nov 8 00:46:03.949730 extend-filesystems[1436]: Found sda1 Nov 8 00:46:03.949730 extend-filesystems[1436]: Found sda2 Nov 8 00:46:03.949730 extend-filesystems[1436]: Found sda3 Nov 8 00:46:03.949730 extend-filesystems[1436]: Found usr Nov 8 00:46:03.949730 extend-filesystems[1436]: Found sda4 Nov 8 00:46:03.949730 extend-filesystems[1436]: Found sda6 Nov 8 00:46:03.949730 extend-filesystems[1436]: Found sda7 Nov 8 00:46:03.949730 extend-filesystems[1436]: Found sda9 Nov 8 00:46:03.949730 extend-filesystems[1436]: Checking size of /dev/sda9 Nov 8 00:46:03.949114 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:46:04.076745 update_engine[1443]: I20251108 00:46:04.038968 1443 main.cc:92] Flatcar Update Engine starting Nov 8 00:46:04.076745 update_engine[1443]: I20251108 00:46:04.040735 1443 update_check_scheduler.cc:74] Next update check in 5m10s Nov 8 00:46:03.949629 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:46:03.957176 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:46:03.957235 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:46:03.961132 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:46:03.961341 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:46:04.040994 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:46:04.082867 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:46:04.112959 jq[1445]: true Nov 8 00:46:04.124266 tar[1448]: linux-amd64/LICENSE Nov 8 00:46:04.124595 coreos-metadata[1433]: Nov 08 00:46:04.124 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 8 00:46:04.146498 tar[1448]: linux-amd64/helm Nov 8 00:46:04.166061 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:46:04.167291 extend-filesystems[1436]: Resized partition /dev/sda9 Nov 8 00:46:04.296357 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Nov 8 00:46:04.296459 extend-filesystems[1475]: resize2fs 1.47.1 (20-May-2024) Nov 8 00:46:04.307948 jq[1470]: true Nov 8 00:46:04.460926 systemd-logind[1442]: Watching system buttons on /dev/input/event1 (Power Button) Nov 8 00:46:04.460966 systemd-logind[1442]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 8 00:46:04.479408 systemd-logind[1442]: New seat seat0. Nov 8 00:46:04.521060 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:46:04.978534 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1321) Nov 8 00:46:05.064382 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:46:05.066331 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
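The "Checking size of /dev/sda9" pass above feeds the resize recorded below: the root filesystem is grown online from 553472 to 20360187 4 KiB blocks, i.e. 20360187 x 4096 bytes, roughly 77.7 GiB. The equivalent manual step is a single call, safe on a mounted ext4 filesystem as the "on-line resizing required" message indicates:

    resize2fs /dev/sda9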
Nov 8 00:46:05.134752 systemd[1]: Starting sshkeys.service... Nov 8 00:46:05.239905 coreos-metadata[1433]: Nov 08 00:46:05.239 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 8 00:46:05.317623 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 8 00:46:05.327547 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Nov 8 00:46:05.390888 locksmithd[1466]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:46:05.525691 systemd-networkd[1363]: eth0: Gained IPv6LL Nov 8 00:46:05.541292 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Nov 8 00:46:05.591158 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:46:05.629161 coreos-metadata[1506]: Nov 08 00:46:05.629 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Nov 8 00:46:05.749072 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:46:05.769069 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:46:05.784885 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:46:05.785602 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:46:05.796501 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Nov 8 00:46:05.821298 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:46:05.876104 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:46:05.884071 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:46:05.890667 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:46:05.891640 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:46:05.946142 containerd[1471]: time="2025-11-08T00:46:05.945186244Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:46:05.948008 extend-filesystems[1475]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Nov 8 00:46:05.948008 extend-filesystems[1475]: old_desc_blocks = 1, new_desc_blocks = 10 Nov 8 00:46:05.948008 extend-filesystems[1475]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Nov 8 00:46:05.969002 extend-filesystems[1436]: Resized filesystem in /dev/sda9 Nov 8 00:46:05.951782 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:46:05.952222 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.001797958Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.004208055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.004234065Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.004249975Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.004522685Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.004541895Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.004618165Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.004632285Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.004835795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.004852125Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:46:06.005965 containerd[1471]: time="2025-11-08T00:46:06.004864875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:46:06.006308 containerd[1471]: time="2025-11-08T00:46:06.004874535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:46:06.006308 containerd[1471]: time="2025-11-08T00:46:06.004984364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:46:06.006308 containerd[1471]: time="2025-11-08T00:46:06.005298704Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:46:06.006308 containerd[1471]: time="2025-11-08T00:46:06.005504134Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:46:06.006308 containerd[1471]: time="2025-11-08T00:46:06.005519464Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:46:06.006308 containerd[1471]: time="2025-11-08T00:46:06.005658904Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:46:06.006308 containerd[1471]: time="2025-11-08T00:46:06.005727574Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:46:06.044874 containerd[1471]: time="2025-11-08T00:46:06.044806705Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:46:06.045059 containerd[1471]: time="2025-11-08T00:46:06.044908684Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:46:06.045095 containerd[1471]: time="2025-11-08T00:46:06.045074144Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Nov 8 00:46:06.045142 containerd[1471]: time="2025-11-08T00:46:06.045115424Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:46:06.045142 containerd[1471]: time="2025-11-08T00:46:06.045130924Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:46:06.046699 containerd[1471]: time="2025-11-08T00:46:06.046660293Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:46:06.047045 containerd[1471]: time="2025-11-08T00:46:06.047009832Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:46:06.047256 containerd[1471]: time="2025-11-08T00:46:06.047223332Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:46:06.047256 containerd[1471]: time="2025-11-08T00:46:06.047246212Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:46:06.047296 containerd[1471]: time="2025-11-08T00:46:06.047259132Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:46:06.047330 containerd[1471]: time="2025-11-08T00:46:06.047292852Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 8 00:46:06.047330 containerd[1471]: time="2025-11-08T00:46:06.047307182Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:46:06.047330 containerd[1471]: time="2025-11-08T00:46:06.047318822Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:46:06.047392 containerd[1471]: time="2025-11-08T00:46:06.047344682Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:46:06.047392 containerd[1471]: time="2025-11-08T00:46:06.047378952Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:46:06.047433 containerd[1471]: time="2025-11-08T00:46:06.047397182Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:46:06.047433 containerd[1471]: time="2025-11-08T00:46:06.047409642Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:46:06.047433 containerd[1471]: time="2025-11-08T00:46:06.047420782Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:46:06.047565 containerd[1471]: time="2025-11-08T00:46:06.047455682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047565 containerd[1471]: time="2025-11-08T00:46:06.047469252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047565 containerd[1471]: time="2025-11-08T00:46:06.047520692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047565 containerd[1471]: time="2025-11-08T00:46:06.047532782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Nov 8 00:46:06.047565 containerd[1471]: time="2025-11-08T00:46:06.047544602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047556312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047601472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047618402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047666182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047682082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047693362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047704262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047715482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047747682Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047766442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047777862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047787732Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047861032Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:46:06.047891 containerd[1471]: time="2025-11-08T00:46:06.047878202Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:46:06.048134 containerd[1471]: time="2025-11-08T00:46:06.047888691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:46:06.048134 containerd[1471]: time="2025-11-08T00:46:06.047965231Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:46:06.048134 containerd[1471]: time="2025-11-08T00:46:06.047996951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.048134 containerd[1471]: time="2025-11-08T00:46:06.048011131Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Nov 8 00:46:06.048134 containerd[1471]: time="2025-11-08T00:46:06.048026611Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:46:06.048134 containerd[1471]: time="2025-11-08T00:46:06.048037051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 8 00:46:06.049132 containerd[1471]: time="2025-11-08T00:46:06.048442381Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:46:06.049132 containerd[1471]: time="2025-11-08T00:46:06.048537811Z" level=info msg="Connect containerd service" Nov 8 00:46:06.049132 containerd[1471]: time="2025-11-08T00:46:06.048590891Z" level=info msg="using legacy CRI server" Nov 8 00:46:06.049132 containerd[1471]: time="2025-11-08T00:46:06.048599831Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:46:06.049132 containerd[1471]: time="2025-11-08T00:46:06.048729161Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:46:06.049665 containerd[1471]: 
time="2025-11-08T00:46:06.049314630Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:46:06.049665 containerd[1471]: time="2025-11-08T00:46:06.049593740Z" level=info msg="Start subscribing containerd event" Nov 8 00:46:06.049665 containerd[1471]: time="2025-11-08T00:46:06.049648420Z" level=info msg="Start recovering state" Nov 8 00:46:06.049741 containerd[1471]: time="2025-11-08T00:46:06.049715320Z" level=info msg="Start event monitor" Nov 8 00:46:06.049741 containerd[1471]: time="2025-11-08T00:46:06.049737010Z" level=info msg="Start snapshots syncer" Nov 8 00:46:06.049777 containerd[1471]: time="2025-11-08T00:46:06.049752870Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:46:06.049777 containerd[1471]: time="2025-11-08T00:46:06.049761240Z" level=info msg="Start streaming server" Nov 8 00:46:06.050758 containerd[1471]: time="2025-11-08T00:46:06.050345379Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:46:06.050758 containerd[1471]: time="2025-11-08T00:46:06.050417499Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:46:06.050582 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:46:06.051433 containerd[1471]: time="2025-11-08T00:46:06.051403678Z" level=info msg="containerd successfully booted in 0.169012s" Nov 8 00:46:06.641265 coreos-metadata[1506]: Nov 08 00:46:06.641 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Nov 8 00:46:06.725838 dbus-daemon[1434]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1363 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 8 00:46:06.725949 systemd-networkd[1363]: eth0: DHCPv4 address 172.239.57.65/24, gateway 172.239.57.1 acquired from 23.192.120.224 Nov 8 00:46:06.728776 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Nov 8 00:46:06.729319 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Nov 8 00:46:06.729837 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Nov 8 00:46:06.756538 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 8 00:46:06.758149 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:46:06.761055 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:46:06.772642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:46:06.792521 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:46:06.917577 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:46:07.141797 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 8 00:46:07.142547 dbus-daemon[1434]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1533 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 8 00:46:07.151237 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 8 00:46:07.188042 systemd[1]: Starting polkit.service - Authorization Manager... 
Nov 8 00:46:07.232371 polkitd[1545]: Started polkitd version 121 Nov 8 00:46:07.253400 polkitd[1545]: Loading rules from directory /etc/polkit-1/rules.d Nov 8 00:46:07.253509 polkitd[1545]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 8 00:46:07.255489 polkitd[1545]: Finished loading, compiling and executing 2 rules Nov 8 00:46:07.256536 systemd[1]: Started polkit.service - Authorization Manager. Nov 8 00:46:07.256137 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 8 00:46:07.259227 polkitd[1545]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 8 00:46:07.286924 coreos-metadata[1433]: Nov 08 00:46:07.286 INFO Putting http://169.254.169.254/v1/token: Attempt #3 Nov 8 00:46:07.311505 tar[1448]: linux-amd64/README.md Nov 8 00:46:07.359174 systemd-resolved[1364]: System hostname changed to '172-239-57-65'. Nov 8 00:46:07.359384 systemd-hostnamed[1533]: Hostname set to <172-239-57-65> (transient) Nov 8 00:46:07.369146 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:46:07.467370 coreos-metadata[1433]: Nov 08 00:46:07.466 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Nov 8 00:46:07.654053 coreos-metadata[1433]: Nov 08 00:46:07.653 INFO Fetch successful Nov 8 00:46:07.654233 coreos-metadata[1433]: Nov 08 00:46:07.654 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Nov 8 00:46:07.731140 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:46:07.764733 systemd[1]: Started sshd@0-172.239.57.65:22-147.75.109.163:54572.service - OpenSSH per-connection server daemon (147.75.109.163:54572). Nov 8 00:46:08.012238 coreos-metadata[1433]: Nov 08 00:46:08.011 INFO Fetch successful Nov 8 00:46:08.111167 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Nov 8 00:46:08.163501 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:46:08.164815 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:46:08.188380 sshd[1560]: Accepted publickey for core from 147.75.109.163 port 54572 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:46:08.211942 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:46:08.242276 systemd-logind[1442]: New session 1 of user core. Nov 8 00:46:08.246151 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:46:08.258298 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:46:08.315729 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:46:08.334642 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:46:08.364109 (systemd)[1583]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:46:08.614647 systemd[1583]: Queued start job for default target default.target. Nov 8 00:46:08.623468 systemd[1583]: Created slice app.slice - User Application Slice. Nov 8 00:46:08.623609 systemd[1583]: Reached target paths.target - Paths. Nov 8 00:46:08.623708 systemd[1583]: Reached target timers.target - Timers. Nov 8 00:46:08.626764 systemd[1583]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:46:08.644318 systemd[1583]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:46:08.644724 systemd[1583]: Reached target sockets.target - Sockets. 
Nov 8 00:46:08.644855 systemd[1583]: Reached target basic.target - Basic System. Nov 8 00:46:08.645458 systemd[1583]: Reached target default.target - Main User Target. Nov 8 00:46:08.645547 systemd[1583]: Startup finished in 256ms. Nov 8 00:46:08.645848 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:46:08.655886 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:46:08.658243 coreos-metadata[1506]: Nov 08 00:46:08.657 INFO Putting http://169.254.169.254/v1/token: Attempt #3 Nov 8 00:46:08.753609 coreos-metadata[1506]: Nov 08 00:46:08.753 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Nov 8 00:46:08.950972 coreos-metadata[1506]: Nov 08 00:46:08.950 INFO Fetch successful Nov 8 00:46:08.991105 systemd[1]: Started sshd@1-172.239.57.65:22-147.75.109.163:54588.service - OpenSSH per-connection server daemon (147.75.109.163:54588). Nov 8 00:46:09.019857 update-ssh-keys[1596]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:46:09.023582 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 8 00:46:09.027337 systemd[1]: Finished sshkeys.service. Nov 8 00:46:09.526071 sshd[1598]: Accepted publickey for core from 147.75.109.163 port 54588 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:46:09.522307 systemd-logind[1442]: New session 2 of user core. Nov 8 00:46:09.515082 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:46:09.530804 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:46:09.619330 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:46:09.621158 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:46:09.622625 systemd[1]: Startup finished in 3.789s (kernel) + 11.018s (initrd) + 12.793s (userspace) = 27.601s. Nov 8 00:46:09.655745 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:46:09.784463 sshd[1598]: pam_unix(sshd:session): session closed for user core Nov 8 00:46:09.788634 systemd[1]: sshd@1-172.239.57.65:22-147.75.109.163:54588.service: Deactivated successfully. Nov 8 00:46:09.791284 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:46:09.792203 systemd-logind[1442]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:46:09.795183 systemd-logind[1442]: Removed session 2. Nov 8 00:46:09.871859 systemd[1]: Started sshd@2-172.239.57.65:22-147.75.109.163:43012.service - OpenSSH per-connection server daemon (147.75.109.163:43012). Nov 8 00:46:10.211762 sshd[1617]: Accepted publickey for core from 147.75.109.163 port 43012 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:46:10.230704 sshd[1617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:46:10.237717 systemd-logind[1442]: New session 3 of user core. Nov 8 00:46:10.242690 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:46:10.605409 sshd[1617]: pam_unix(sshd:session): session closed for user core Nov 8 00:46:10.627551 systemd[1]: sshd@2-172.239.57.65:22-147.75.109.163:43012.service: Deactivated successfully. Nov 8 00:46:10.631135 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:46:10.632160 systemd-logind[1442]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:46:10.634222 systemd-logind[1442]: Removed session 3. 
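The sshd@0-172.239.57.65:22-147.75.109.163:54572.service style unit names above are systemd socket activation with Accept=yes: the listening socket stays with systemd, and each incoming connection spawns one instance of a template service with its stdio wired to that connection. A sketch of the pattern (not the verbatim shipped units):

    # sshd.socket
    [Socket]
    ListenStream=22
    Accept=yes

    # sshd@.service
    [Service]
    # -i puts sshd in inetd mode: one process per connection on stdin/stdout
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket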
Nov 8 00:46:10.701855 systemd[1]: Started sshd@3-172.239.57.65:22-147.75.109.163:43014.service - OpenSSH per-connection server daemon (147.75.109.163:43014). Nov 8 00:46:10.892517 kubelet[1608]: E1108 00:46:10.890838 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:46:10.894963 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:46:10.895221 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:46:10.896314 systemd[1]: kubelet.service: Consumed 2.938s CPU time. Nov 8 00:46:11.042294 sshd[1628]: Accepted publickey for core from 147.75.109.163 port 43014 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:46:11.044427 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:46:11.052021 systemd-logind[1442]: New session 4 of user core. Nov 8 00:46:11.059652 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:46:11.344498 sshd[1628]: pam_unix(sshd:session): session closed for user core Nov 8 00:46:11.348343 systemd-logind[1442]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:46:11.349047 systemd[1]: sshd@3-172.239.57.65:22-147.75.109.163:43014.service: Deactivated successfully. Nov 8 00:46:11.351194 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:46:11.352262 systemd-logind[1442]: Removed session 4. Nov 8 00:46:11.407333 systemd[1]: Started sshd@4-172.239.57.65:22-147.75.109.163:43024.service - OpenSSH per-connection server daemon (147.75.109.163:43024). Nov 8 00:46:11.754366 sshd[1637]: Accepted publickey for core from 147.75.109.163 port 43024 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:46:11.756718 sshd[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:46:11.764872 systemd-logind[1442]: New session 5 of user core. Nov 8 00:46:11.771643 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:46:11.973361 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:46:11.973828 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:46:11.991062 sudo[1640]: pam_unix(sudo:session): session closed for user root Nov 8 00:46:12.045692 sshd[1637]: pam_unix(sshd:session): session closed for user core Nov 8 00:46:12.049346 systemd[1]: sshd@4-172.239.57.65:22-147.75.109.163:43024.service: Deactivated successfully. Nov 8 00:46:12.051694 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:46:12.053439 systemd-logind[1442]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:46:12.054823 systemd-logind[1442]: Removed session 5. Nov 8 00:46:12.112811 systemd[1]: Started sshd@5-172.239.57.65:22-147.75.109.163:43040.service - OpenSSH per-connection server daemon (147.75.109.163:43040). Nov 8 00:46:12.445791 sshd[1645]: Accepted publickey for core from 147.75.109.163 port 43040 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:46:12.448045 sshd[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:46:12.453450 systemd-logind[1442]: New session 6 of user core. 
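The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm init or kubeadm join, so the unit fails and is restarted until one of those runs. For reference, what eventually lands there is a KubeletConfiguration; a minimal illustrative sketch (values are examples, not this node's eventual config):

    # /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd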
Nov 8 00:46:12.465759 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:46:12.651205 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:46:12.651686 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:46:12.656331 sudo[1649]: pam_unix(sudo:session): session closed for user root Nov 8 00:46:12.664457 sudo[1648]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:46:12.664884 sudo[1648]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:46:12.700951 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:46:12.715232 auditctl[1652]: No rules Nov 8 00:46:12.716006 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:46:12.716360 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:46:12.726239 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:46:12.899568 augenrules[1670]: No rules Nov 8 00:46:12.901593 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:46:12.903579 sudo[1648]: pam_unix(sudo:session): session closed for user root Nov 8 00:46:12.957160 sshd[1645]: pam_unix(sshd:session): session closed for user core Nov 8 00:46:12.961836 systemd[1]: sshd@5-172.239.57.65:22-147.75.109.163:43040.service: Deactivated successfully. Nov 8 00:46:12.964773 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:46:12.966569 systemd-logind[1442]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:46:12.968165 systemd-logind[1442]: Removed session 6. Nov 8 00:46:13.031020 systemd[1]: Started sshd@6-172.239.57.65:22-147.75.109.163:43048.service - OpenSSH per-connection server daemon (147.75.109.163:43048). Nov 8 00:46:13.374771 sshd[1678]: Accepted publickey for core from 147.75.109.163 port 43048 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:46:13.376876 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:46:13.382101 systemd-logind[1442]: New session 7 of user core. Nov 8 00:46:13.389641 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:46:13.582340 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:46:13.582786 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:46:15.336863 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:46:15.370220 (dockerd)[1696]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:46:17.032753 dockerd[1696]: time="2025-11-08T00:46:17.032644847Z" level=info msg="Starting up" Nov 8 00:46:17.469620 systemd[1]: var-lib-docker-metacopy\x2dcheck3048257202-merged.mount: Deactivated successfully. Nov 8 00:46:17.488497 dockerd[1696]: time="2025-11-08T00:46:17.488386221Z" level=info msg="Loading containers: start." Nov 8 00:46:17.660621 kernel: Initializing XFRM netlink socket Nov 8 00:46:17.691493 systemd-timesyncd[1383]: Network configuration changed, trying to establish connection. Nov 8 00:46:17.759931 systemd-networkd[1363]: docker0: Link UP Nov 8 00:46:17.776231 dockerd[1696]: time="2025-11-08T00:46:17.776184603Z" level=info msg="Loading containers: done." 
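dockerd's "Loading containers: start." phase above is also where it brings up the docker0 bridge that systemd-networkd reports as Link UP. The DOCKER_OPT_BIP and DOCKER_OPT_MTU variables the unit referenced (logged just above as unset) appear to map to daemon settings that can equally be pinned in daemon.json; a sketch with illustrative values:

    # /etc/docker/daemon.json
    {
      "bip": "172.17.0.1/16",
      "mtu": 1500
    }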
Nov 8 00:46:17.817196 dockerd[1696]: time="2025-11-08T00:46:17.817125232Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:46:17.817422 dockerd[1696]: time="2025-11-08T00:46:17.817295292Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:46:17.817617 dockerd[1696]: time="2025-11-08T00:46:17.817584262Z" level=info msg="Daemon has completed initialization" Nov 8 00:46:17.849307 dockerd[1696]: time="2025-11-08T00:46:17.849020140Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:46:17.849259 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:46:18.463495 systemd-resolved[1364]: Clock change detected. Flushing caches. Nov 8 00:46:18.464592 systemd-timesyncd[1383]: Contacted time server [2607:7c80:54:3::32]:123 (2.flatcar.pool.ntp.org). Nov 8 00:46:18.464694 systemd-timesyncd[1383]: Initial clock synchronization to Sat 2025-11-08 00:46:18.463400 UTC. Nov 8 00:46:19.802626 containerd[1471]: time="2025-11-08T00:46:19.802235007Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 8 00:46:20.860313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2059578511.mount: Deactivated successfully. Nov 8 00:46:21.584736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:46:21.594428 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:46:22.049354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:46:22.058769 (kubelet)[1903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:46:22.282812 kubelet[1903]: E1108 00:46:22.282746 1903 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:46:22.290444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:46:22.290692 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
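The overlay2 warning at the top of this stretch is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, the daemon falls back from the native overlayfs diff path when computing image layers, which may slow image builds but is otherwise harmless. The active storage driver can be confirmed with:

    docker info --format '{{.Driver}}'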
Nov 8 00:46:23.340526 containerd[1471]: time="2025-11-08T00:46:23.340188169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:23.343712 containerd[1471]: time="2025-11-08T00:46:23.342224937Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 8 00:46:23.352016 containerd[1471]: time="2025-11-08T00:46:23.351942277Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:23.359664 containerd[1471]: time="2025-11-08T00:46:23.359310540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:23.361116 containerd[1471]: time="2025-11-08T00:46:23.360343669Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 3.557920562s" Nov 8 00:46:23.361116 containerd[1471]: time="2025-11-08T00:46:23.360481738Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 8 00:46:23.364884 containerd[1471]: time="2025-11-08T00:46:23.364601544Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 8 00:46:25.799079 containerd[1471]: time="2025-11-08T00:46:25.798952200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:25.800477 containerd[1471]: time="2025-11-08T00:46:25.800409788Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 8 00:46:25.801328 containerd[1471]: time="2025-11-08T00:46:25.801291228Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:25.804182 containerd[1471]: time="2025-11-08T00:46:25.804041895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:25.807167 containerd[1471]: time="2025-11-08T00:46:25.806274153Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 2.441632469s" Nov 8 00:46:25.807167 containerd[1471]: time="2025-11-08T00:46:25.806351863Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 8 00:46:25.811856 containerd[1471]: 
time="2025-11-08T00:46:25.811816437Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 8 00:46:27.554301 containerd[1471]: time="2025-11-08T00:46:27.552707976Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:27.554301 containerd[1471]: time="2025-11-08T00:46:27.553231596Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 8 00:46:27.556947 containerd[1471]: time="2025-11-08T00:46:27.555922533Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:27.561213 containerd[1471]: time="2025-11-08T00:46:27.561132098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:27.563178 containerd[1471]: time="2025-11-08T00:46:27.562795316Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.750799599s" Nov 8 00:46:27.563178 containerd[1471]: time="2025-11-08T00:46:27.563078186Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 8 00:46:27.565506 containerd[1471]: time="2025-11-08T00:46:27.565463643Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 8 00:46:29.366859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3914631500.mount: Deactivated successfully. 
Nov 8 00:46:30.162759 containerd[1471]: time="2025-11-08T00:46:30.162681756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:30.164645 containerd[1471]: time="2025-11-08T00:46:30.164592814Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 8 00:46:30.165595 containerd[1471]: time="2025-11-08T00:46:30.165566043Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:30.168689 containerd[1471]: time="2025-11-08T00:46:30.167696341Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.602185218s" Nov 8 00:46:30.168689 containerd[1471]: time="2025-11-08T00:46:30.167739271Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 8 00:46:30.168689 containerd[1471]: time="2025-11-08T00:46:30.167971001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:30.171910 containerd[1471]: time="2025-11-08T00:46:30.170996038Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 8 00:46:30.813423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1553076972.mount: Deactivated successfully. Nov 8 00:46:32.340554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:46:32.364214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
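
Each "Pulled image" entry pairs a byte count with a wall-clock duration, so pull throughput can be read straight off the log: the kube-proxy pull above moved 25,964,699 bytes in 2.602185218s, roughly 9.5 MiB/s. A small sketch of that arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Figures copied from the kube-proxy pull entry above.
        bytesRead := 25964699.0
        elapsed := 2602185218 * time.Nanosecond // 2.602185218s

        mib := bytesRead / (1024 * 1024)
        fmt.Printf("%.1f MiB in %s = %.1f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
    }
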
Nov 8 00:46:32.752857 containerd[1471]: time="2025-11-08T00:46:32.752204107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:32.753944 containerd[1471]: time="2025-11-08T00:46:32.753866895Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 8 00:46:32.754114 containerd[1471]: time="2025-11-08T00:46:32.754067805Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:32.759288 containerd[1471]: time="2025-11-08T00:46:32.758466670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:32.760417 containerd[1471]: time="2025-11-08T00:46:32.760384948Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.58934283s" Nov 8 00:46:32.760912 containerd[1471]: time="2025-11-08T00:46:32.760892468Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 8 00:46:32.762909 containerd[1471]: time="2025-11-08T00:46:32.762869336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 8 00:46:32.785006 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:46:32.796504 (kubelet)[1985]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:46:33.072688 kubelet[1985]: E1108 00:46:33.072171 1985 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:46:33.076577 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:46:33.076823 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:46:33.361499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount975845594.mount: Deactivated successfully. 
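
The kubelet failure above is the expected pre-init state on a fresh node: the unit references /var/lib/kubelet/config.yaml, kubeadm has not written it yet, so the process exits with status 1 and systemd's restart counter keeps climbing until the file appears. A stdlib sketch of that precondition check (the polling loop is illustrative, not the kubelet's actual behavior):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const path = "/var/lib/kubelet/config.yaml" // path from the log above
        for {
            if _, err := os.Stat(path); err == nil {
                fmt.Println("kubelet config present, safe to start")
                return
            } else if !os.IsNotExist(err) {
                fmt.Fprintln(os.Stderr, "unexpected error:", err)
                os.Exit(1)
            }
            fmt.Println("waiting for kubeadm to write", path)
            time.Sleep(2 * time.Second)
        }
    }
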
Nov 8 00:46:33.366069 containerd[1471]: time="2025-11-08T00:46:33.366020353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:33.367233 containerd[1471]: time="2025-11-08T00:46:33.366984232Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 8 00:46:33.367946 containerd[1471]: time="2025-11-08T00:46:33.367867871Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:33.371850 containerd[1471]: time="2025-11-08T00:46:33.370870618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:33.371850 containerd[1471]: time="2025-11-08T00:46:33.371734407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 608.560511ms" Nov 8 00:46:33.371850 containerd[1471]: time="2025-11-08T00:46:33.371770407Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 8 00:46:33.373273 containerd[1471]: time="2025-11-08T00:46:33.373249186Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 8 00:46:38.022062 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 8 00:46:38.444171 containerd[1471]: time="2025-11-08T00:46:38.442781826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:38.444171 containerd[1471]: time="2025-11-08T00:46:38.444054875Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 8 00:46:38.445582 containerd[1471]: time="2025-11-08T00:46:38.445558843Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:38.449576 containerd[1471]: time="2025-11-08T00:46:38.449543779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:46:38.451225 containerd[1471]: time="2025-11-08T00:46:38.451179198Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 5.077896002s" Nov 8 00:46:38.451312 containerd[1471]: time="2025-11-08T00:46:38.451288117Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 8 00:46:42.590878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
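
Each successful pull above reports two digests: the image id (the sha256 of the image's config blob) and the repo digest (the sha256 of the raw manifest bytes, which is what an @sha256:... pinned reference names). A sketch of how a repo digest is derived, using a placeholder manifest body:

    package main

    import (
        "crypto/sha256"
        "fmt"
    )

    // repoDigest returns the content address of a manifest, i.e. the value
    // that appears after "@sha256:" in a pinned image reference.
    func repoDigest(manifest []byte) string {
        sum := sha256.Sum256(manifest)
        return fmt.Sprintf("sha256:%x", sum)
    }

    func main() {
        // Placeholder bytes; a real manifest is the JSON served by the registry.
        fmt.Println(repoDigest([]byte(`{"schemaVersion":2}`)))
    }
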
Nov 8 00:46:42.610369 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:46:42.651255 systemd[1]: Reloading requested from client PID 2069 ('systemctl') (unit session-7.scope)... Nov 8 00:46:42.651458 systemd[1]: Reloading... Nov 8 00:46:42.840166 zram_generator::config[2108]: No configuration found. Nov 8 00:46:42.984881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:46:43.070069 systemd[1]: Reloading finished in 418 ms. Nov 8 00:46:43.134656 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:46:43.134782 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:46:43.135084 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:46:43.137066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:46:43.362305 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:46:43.373934 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:46:43.463065 kubelet[2162]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:46:43.463591 kubelet[2162]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:46:43.463845 kubelet[2162]: I1108 00:46:43.463796 2162 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:46:43.881612 kubelet[2162]: I1108 00:46:43.881543 2162 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:46:43.881612 kubelet[2162]: I1108 00:46:43.881590 2162 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:46:43.882468 kubelet[2162]: I1108 00:46:43.882444 2162 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:46:43.882534 kubelet[2162]: I1108 00:46:43.882472 2162 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
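
The docker.socket complaint during the reload concerns the legacy /var/run prefix: on systemd systems /var/run is a symlink into /run, so the unit keeps working, but systemd rewrites the path and asks for the unit file to be updated. A quick way to confirm the symlink (path as on this host):

    package main

    import (
        "fmt"
        "log"
        "os"
    )

    func main() {
        // On systemd systems /var/run is expected to resolve into /run.
        target, err := os.Readlink("/var/run")
        if err != nil {
            log.Fatal(err) // not a symlink on this system
        }
        fmt.Println("/var/run ->", target) // typically "../run" or "/run"
    }
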
Nov 8 00:46:43.882785 kubelet[2162]: I1108 00:46:43.882763 2162 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:46:43.888967 kubelet[2162]: E1108 00:46:43.888889 2162 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.57.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.57.65:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:46:43.890010 kubelet[2162]: I1108 00:46:43.889114 2162 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:46:43.897515 kubelet[2162]: E1108 00:46:43.897483 2162 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:46:43.897632 kubelet[2162]: I1108 00:46:43.897536 2162 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:46:43.905927 kubelet[2162]: I1108 00:46:43.904361 2162 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 8 00:46:43.905927 kubelet[2162]: I1108 00:46:43.904875 2162 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:46:43.905927 kubelet[2162]: I1108 00:46:43.904901 2162 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-57-65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:46:43.905927 kubelet[2162]: I1108 00:46:43.905256 2162 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:46:43.906593 kubelet[2162]: I1108 00:46:43.905268 2162 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:46:43.906593 kubelet[2162]: I1108 00:46:43.905646 2162 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) 
manager" Nov 8 00:46:43.909510 kubelet[2162]: I1108 00:46:43.909472 2162 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:46:43.911301 kubelet[2162]: I1108 00:46:43.911267 2162 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:46:43.914691 kubelet[2162]: I1108 00:46:43.911775 2162 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:46:43.914691 kubelet[2162]: I1108 00:46:43.911862 2162 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:46:43.914691 kubelet[2162]: I1108 00:46:43.911908 2162 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:46:43.914691 kubelet[2162]: E1108 00:46:43.912104 2162 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.57.65:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-57-65&limit=500&resourceVersion=0\": dial tcp 172.239.57.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:46:43.914691 kubelet[2162]: E1108 00:46:43.913995 2162 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.57.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.57.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:46:43.917382 kubelet[2162]: I1108 00:46:43.917113 2162 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:46:43.918515 kubelet[2162]: I1108 00:46:43.917850 2162 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:46:43.918515 kubelet[2162]: I1108 00:46:43.917881 2162 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:46:43.918515 kubelet[2162]: W1108 00:46:43.918021 2162 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 8 00:46:43.924180 kubelet[2162]: I1108 00:46:43.923735 2162 server.go:1262] "Started kubelet" Nov 8 00:46:43.925037 kubelet[2162]: I1108 00:46:43.925002 2162 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:46:43.933671 kubelet[2162]: E1108 00:46:43.931413 2162 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.57.65:6443/api/v1/namespaces/default/events\": dial tcp 172.239.57.65:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-57-65.1875e1881e2fdf53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-57-65,UID:172-239-57-65,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-57-65,},FirstTimestamp:2025-11-08 00:46:43.923672915 +0000 UTC m=+0.536268205,LastTimestamp:2025-11-08 00:46:43.923672915 +0000 UTC m=+0.536268205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-57-65,}" Nov 8 00:46:43.934517 kubelet[2162]: I1108 00:46:43.934090 2162 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:46:43.936183 kubelet[2162]: I1108 00:46:43.936110 2162 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:46:43.937741 kubelet[2162]: I1108 00:46:43.937713 2162 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:46:43.938014 kubelet[2162]: E1108 00:46:43.937982 2162 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"172-239-57-65\" not found" Nov 8 00:46:43.939282 kubelet[2162]: I1108 00:46:43.938320 2162 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:46:43.939282 kubelet[2162]: I1108 00:46:43.938395 2162 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:46:43.943338 kubelet[2162]: I1108 00:46:43.942864 2162 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:46:43.943338 kubelet[2162]: I1108 00:46:43.942915 2162 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:46:43.943510 kubelet[2162]: I1108 00:46:43.943494 2162 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:46:43.944775 kubelet[2162]: I1108 00:46:43.944724 2162 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:46:43.950545 kubelet[2162]: E1108 00:46:43.949633 2162 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.57.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.57.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:46:43.950545 kubelet[2162]: E1108 00:46:43.950010 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-65?timeout=10s\": dial tcp 172.239.57.65:6443: connect: connection refused" interval="200ms" Nov 8 00:46:43.951688 kubelet[2162]: E1108 00:46:43.951658 2162 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:46:43.952205 kubelet[2162]: I1108 00:46:43.952177 2162 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:46:43.952205 kubelet[2162]: I1108 00:46:43.952197 2162 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:46:43.952330 kubelet[2162]: I1108 00:46:43.952275 2162 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:46:43.974088 kubelet[2162]: I1108 00:46:43.974011 2162 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:46:43.976623 kubelet[2162]: I1108 00:46:43.976601 2162 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 8 00:46:43.976743 kubelet[2162]: I1108 00:46:43.976729 2162 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:46:43.976871 kubelet[2162]: I1108 00:46:43.976859 2162 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:46:43.977026 kubelet[2162]: E1108 00:46:43.976986 2162 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:46:43.980587 kubelet[2162]: E1108 00:46:43.980563 2162 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.57.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.57.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:46:43.991047 kubelet[2162]: I1108 00:46:43.991027 2162 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:46:43.991360 kubelet[2162]: I1108 00:46:43.991345 2162 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:46:43.991475 kubelet[2162]: I1108 00:46:43.991462 2162 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:46:43.993425 kubelet[2162]: I1108 00:46:43.993406 2162 policy_none.go:49] "None policy: Start" Nov 8 00:46:43.993733 kubelet[2162]: I1108 00:46:43.993520 2162 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:46:43.993807 kubelet[2162]: I1108 00:46:43.993794 2162 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:46:43.994744 kubelet[2162]: I1108 00:46:43.994730 2162 policy_none.go:47] "Start" Nov 8 00:46:44.001283 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:46:44.017929 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:46:44.022226 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
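
The m=+0.536268205 suffix in the event timestamps above is Go's monotonic clock reading: the kubelet process had been up for roughly half a second when the event was recorded. Any Go time.Time that still carries its monotonic component prints the same way:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start := time.Now()
        time.Sleep(500 * time.Millisecond)
        // A time.Time that carries a monotonic reading prints a trailing
        // "m=+..." offset, exactly like the kubelet event timestamps above.
        fmt.Println(time.Now()) // e.g. "2025-11-08 00:46:43.92... +0000 UTC m=+0.50..."
        fmt.Println("elapsed:", time.Since(start))
    }
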
Nov 8 00:46:44.030725 kubelet[2162]: E1108 00:46:44.030509 2162 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:46:44.030806 kubelet[2162]: I1108 00:46:44.030766 2162 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:46:44.030806 kubelet[2162]: I1108 00:46:44.030785 2162 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:46:44.032010 kubelet[2162]: I1108 00:46:44.031297 2162 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:46:44.033793 kubelet[2162]: E1108 00:46:44.033466 2162 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:46:44.033793 kubelet[2162]: E1108 00:46:44.033734 2162 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-239-57-65\" not found" Nov 8 00:46:44.090913 systemd[1]: Created slice kubepods-burstable-podbfcff075dba11aa230dac714bc1f2ff7.slice - libcontainer container kubepods-burstable-podbfcff075dba11aa230dac714bc1f2ff7.slice. Nov 8 00:46:44.107913 kubelet[2162]: E1108 00:46:44.107884 2162 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-65\" not found" node="172-239-57-65" Nov 8 00:46:44.111291 systemd[1]: Created slice kubepods-burstable-pod27f6e18475da51437218c0ceec5276cb.slice - libcontainer container kubepods-burstable-pod27f6e18475da51437218c0ceec5276cb.slice. Nov 8 00:46:44.113738 kubelet[2162]: E1108 00:46:44.113718 2162 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-65\" not found" node="172-239-57-65" Nov 8 00:46:44.125548 systemd[1]: Created slice kubepods-burstable-poded591b0a60ab1b03f3a98b0ec76fd6bf.slice - libcontainer container kubepods-burstable-poded591b0a60ab1b03f3a98b0ec76fd6bf.slice. 
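
The kubepods-burstable-pod<UID>.slice units created above come from the kubelet's systemd cgroup driver, which flattens the cgroup path /kubepods/burstable/pod<UID> into a single dash-joined slice name. A simplified sketch of that naming for the three static-pod UIDs in this log (the real kubelet also escapes characters systemd rejects):

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceName flattens a cgroup path into a systemd slice unit name,
    // e.g. ["kubepods","burstable","pod<uid>"] -> "kubepods-burstable-pod<uid>.slice".
    func sliceName(parts ...string) string {
        return strings.Join(parts, "-") + ".slice"
    }

    func main() {
        for _, uid := range []string{
            "bfcff075dba11aa230dac714bc1f2ff7", // kube-apiserver pod
            "27f6e18475da51437218c0ceec5276cb", // kube-controller-manager pod
            "ed591b0a60ab1b03f3a98b0ec76fd6bf", // kube-scheduler pod
        } {
            fmt.Println(sliceName("kubepods", "burstable", "pod"+uid))
        }
    }
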
Nov 8 00:46:44.127508 kubelet[2162]: E1108 00:46:44.127479 2162 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-65\" not found" node="172-239-57-65" Nov 8 00:46:44.133077 kubelet[2162]: I1108 00:46:44.132573 2162 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-65" Nov 8 00:46:44.133077 kubelet[2162]: E1108 00:46:44.132989 2162 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.57.65:6443/api/v1/nodes\": dial tcp 172.239.57.65:6443: connect: connection refused" node="172-239-57-65" Nov 8 00:46:44.150726 kubelet[2162]: E1108 00:46:44.150697 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-65?timeout=10s\": dial tcp 172.239.57.65:6443: connect: connection refused" interval="400ms" Nov 8 00:46:44.240541 kubelet[2162]: I1108 00:46:44.240488 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed591b0a60ab1b03f3a98b0ec76fd6bf-kubeconfig\") pod \"kube-scheduler-172-239-57-65\" (UID: \"ed591b0a60ab1b03f3a98b0ec76fd6bf\") " pod="kube-system/kube-scheduler-172-239-57-65" Nov 8 00:46:44.240541 kubelet[2162]: I1108 00:46:44.240536 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfcff075dba11aa230dac714bc1f2ff7-k8s-certs\") pod \"kube-apiserver-172-239-57-65\" (UID: \"bfcff075dba11aa230dac714bc1f2ff7\") " pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:44.240541 kubelet[2162]: I1108 00:46:44.240553 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/27f6e18475da51437218c0ceec5276cb-flexvolume-dir\") pod \"kube-controller-manager-172-239-57-65\" (UID: \"27f6e18475da51437218c0ceec5276cb\") " pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:44.240872 kubelet[2162]: I1108 00:46:44.240568 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27f6e18475da51437218c0ceec5276cb-k8s-certs\") pod \"kube-controller-manager-172-239-57-65\" (UID: \"27f6e18475da51437218c0ceec5276cb\") " pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:44.240872 kubelet[2162]: I1108 00:46:44.240586 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27f6e18475da51437218c0ceec5276cb-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-57-65\" (UID: \"27f6e18475da51437218c0ceec5276cb\") " pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:44.240872 kubelet[2162]: I1108 00:46:44.240602 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfcff075dba11aa230dac714bc1f2ff7-ca-certs\") pod \"kube-apiserver-172-239-57-65\" (UID: \"bfcff075dba11aa230dac714bc1f2ff7\") " pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:44.240872 kubelet[2162]: I1108 00:46:44.240616 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfcff075dba11aa230dac714bc1f2ff7-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-57-65\" (UID: \"bfcff075dba11aa230dac714bc1f2ff7\") " pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:44.240872 kubelet[2162]: I1108 00:46:44.240629 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27f6e18475da51437218c0ceec5276cb-ca-certs\") pod \"kube-controller-manager-172-239-57-65\" (UID: \"27f6e18475da51437218c0ceec5276cb\") " pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:44.241020 kubelet[2162]: I1108 00:46:44.240647 2162 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27f6e18475da51437218c0ceec5276cb-kubeconfig\") pod \"kube-controller-manager-172-239-57-65\" (UID: \"27f6e18475da51437218c0ceec5276cb\") " pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:44.334898 kubelet[2162]: I1108 00:46:44.334860 2162 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-65" Nov 8 00:46:44.335427 kubelet[2162]: E1108 00:46:44.335397 2162 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.57.65:6443/api/v1/nodes\": dial tcp 172.239.57.65:6443: connect: connection refused" node="172-239-57-65" Nov 8 00:46:44.410852 kubelet[2162]: E1108 00:46:44.410746 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:44.412171 containerd[1471]: time="2025-11-08T00:46:44.412047657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-57-65,Uid:bfcff075dba11aa230dac714bc1f2ff7,Namespace:kube-system,Attempt:0,}" Nov 8 00:46:44.418484 kubelet[2162]: E1108 00:46:44.418463 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:44.419049 containerd[1471]: time="2025-11-08T00:46:44.418998680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-57-65,Uid:27f6e18475da51437218c0ceec5276cb,Namespace:kube-system,Attempt:0,}" Nov 8 00:46:44.430005 kubelet[2162]: E1108 00:46:44.429814 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:44.430176 containerd[1471]: time="2025-11-08T00:46:44.430132079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-57-65,Uid:ed591b0a60ab1b03f3a98b0ec76fd6bf,Namespace:kube-system,Attempt:0,}" Nov 8 00:46:44.551989 kubelet[2162]: E1108 00:46:44.551946 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-65?timeout=10s\": dial tcp 172.239.57.65:6443: connect: connection refused" interval="800ms" Nov 8 00:46:44.737716 kubelet[2162]: I1108 00:46:44.737566 2162 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-65" Nov 8 00:46:44.737983 kubelet[2162]: E1108 00:46:44.737960 2162 kubelet_node_status.go:107] "Unable to register node with API 
server" err="Post \"https://172.239.57.65:6443/api/v1/nodes\": dial tcp 172.239.57.65:6443: connect: connection refused" node="172-239-57-65" Nov 8 00:46:44.806017 kubelet[2162]: E1108 00:46:44.805949 2162 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.239.57.65:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.239.57.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:46:44.872321 kubelet[2162]: E1108 00:46:44.872186 2162 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.239.57.65:6443/api/v1/namespaces/default/events\": dial tcp 172.239.57.65:6443: connect: connection refused" event="&Event{ObjectMeta:{172-239-57-65.1875e1881e2fdf53 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-239-57-65,UID:172-239-57-65,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-239-57-65,},FirstTimestamp:2025-11-08 00:46:43.923672915 +0000 UTC m=+0.536268205,LastTimestamp:2025-11-08 00:46:43.923672915 +0000 UTC m=+0.536268205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-239-57-65,}" Nov 8 00:46:44.918585 kubelet[2162]: E1108 00:46:44.918519 2162 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.239.57.65:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-239-57-65&limit=500&resourceVersion=0\": dial tcp 172.239.57.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:46:45.009676 kubelet[2162]: E1108 00:46:45.009564 2162 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.239.57.65:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.239.57.65:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:46:45.028255 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1062443403.mount: Deactivated successfully. 
Nov 8 00:46:45.029450 containerd[1471]: time="2025-11-08T00:46:45.029166350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:46:45.031000 containerd[1471]: time="2025-11-08T00:46:45.030972418Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:46:45.031977 containerd[1471]: time="2025-11-08T00:46:45.031942757Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Nov 8 00:46:45.032593 containerd[1471]: time="2025-11-08T00:46:45.032566386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:46:45.033588 containerd[1471]: time="2025-11-08T00:46:45.033558655Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:46:45.035758 containerd[1471]: time="2025-11-08T00:46:45.034998894Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:46:45.035758 containerd[1471]: time="2025-11-08T00:46:45.035703233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:46:45.036715 containerd[1471]: time="2025-11-08T00:46:45.036669402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:46:45.039288 containerd[1471]: time="2025-11-08T00:46:45.039246109Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 620.106639ms" Nov 8 00:46:45.041454 containerd[1471]: time="2025-11-08T00:46:45.041415847Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 629.114891ms" Nov 8 00:46:45.043979 containerd[1471]: time="2025-11-08T00:46:45.043843865Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 613.638126ms" Nov 8 00:46:45.334592 kubelet[2162]: E1108 00:46:45.334552 2162 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.239.57.65:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.239.57.65:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:46:45.355645 kubelet[2162]: E1108 00:46:45.353390 2162 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.239.57.65:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-239-57-65?timeout=10s\": dial tcp 172.239.57.65:6443: connect: connection refused" interval="1.6s" Nov 8 00:46:45.543020 containerd[1471]: time="2025-11-08T00:46:45.541486217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:46:45.543020 containerd[1471]: time="2025-11-08T00:46:45.541538877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:46:45.543020 containerd[1471]: time="2025-11-08T00:46:45.541552427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:46:45.543020 containerd[1471]: time="2025-11-08T00:46:45.541631027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:46:45.543020 containerd[1471]: time="2025-11-08T00:46:45.532662696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:46:45.543020 containerd[1471]: time="2025-11-08T00:46:45.532728776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:46:45.543020 containerd[1471]: time="2025-11-08T00:46:45.532752856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:46:45.543020 containerd[1471]: time="2025-11-08T00:46:45.532857576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:46:45.552098 kubelet[2162]: I1108 00:46:45.549737 2162 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-65" Nov 8 00:46:45.552098 kubelet[2162]: E1108 00:46:45.550073 2162 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.239.57.65:6443/api/v1/nodes\": dial tcp 172.239.57.65:6443: connect: connection refused" node="172-239-57-65" Nov 8 00:46:45.650463 containerd[1471]: time="2025-11-08T00:46:45.627928791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:46:45.650463 containerd[1471]: time="2025-11-08T00:46:45.628030481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:46:45.650463 containerd[1471]: time="2025-11-08T00:46:45.628052761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:46:45.650463 containerd[1471]: time="2025-11-08T00:46:45.628192951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:46:45.749293 systemd[1]: Started cri-containerd-7cabd81934c05585176d9950aceac7f7f4ec1eb574673730693bf9d0c2997069.scope - libcontainer container 7cabd81934c05585176d9950aceac7f7f4ec1eb574673730693bf9d0c2997069. 
Nov 8 00:46:45.805076 systemd[1]: Started cri-containerd-b13f15be4d710fb87f3b90e4dc88739fea6fe22c9b736728371860692b212610.scope - libcontainer container b13f15be4d710fb87f3b90e4dc88739fea6fe22c9b736728371860692b212610. Nov 8 00:46:45.924326 systemd[1]: Started cri-containerd-d6b6240923ab3ead020e2117b712f77a42c52c3a12045b9bdd6fd66d4acc99b3.scope - libcontainer container d6b6240923ab3ead020e2117b712f77a42c52c3a12045b9bdd6fd66d4acc99b3. Nov 8 00:46:45.931169 kubelet[2162]: E1108 00:46:45.930983 2162 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.239.57.65:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.239.57.65:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:46:46.046679 containerd[1471]: time="2025-11-08T00:46:46.045903423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-239-57-65,Uid:ed591b0a60ab1b03f3a98b0ec76fd6bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cabd81934c05585176d9950aceac7f7f4ec1eb574673730693bf9d0c2997069\"" Nov 8 00:46:46.049083 kubelet[2162]: E1108 00:46:46.048396 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:46.054073 containerd[1471]: time="2025-11-08T00:46:46.054044725Z" level=info msg="CreateContainer within sandbox \"7cabd81934c05585176d9950aceac7f7f4ec1eb574673730693bf9d0c2997069\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:46:46.066121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3063840981.mount: Deactivated successfully. 
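
The certificate_manager errors are the kubelet's client-certificate bootstrap retrying: it POSTs a CertificateSigningRequest to the API server, which is still unreachable. The CSR it submits identifies the node in its subject; a stdlib sketch of such a request (key type and PEM output are illustrative; the kubelet keeps its keys under /var/lib/kubelet/pki):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "os"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            log.Fatal(err)
        }
        // Node client certs carry the system:nodes group and a
        // system:node:<name> common name, matching this host.
        tmpl := &x509.CertificateRequest{
            Subject: pkix.Name{
                CommonName:   "system:node:172-239-57-65",
                Organization: []string{"system:nodes"},
            },
        }
        der, err := x509.CreateCertificateRequest(rand.Reader, tmpl, key)
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
    }
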
Nov 8 00:46:46.089521 containerd[1471]: time="2025-11-08T00:46:46.088508850Z" level=info msg="CreateContainer within sandbox \"7cabd81934c05585176d9950aceac7f7f4ec1eb574673730693bf9d0c2997069\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e35f51f5977b72a8b6606215abba6d340a5cc6577f2b81083a5cfabff5af0de6\"" Nov 8 00:46:46.089521 containerd[1471]: time="2025-11-08T00:46:46.089271109Z" level=info msg="StartContainer for \"e35f51f5977b72a8b6606215abba6d340a5cc6577f2b81083a5cfabff5af0de6\"" Nov 8 00:46:46.140881 containerd[1471]: time="2025-11-08T00:46:46.140815148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-239-57-65,Uid:27f6e18475da51437218c0ceec5276cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b13f15be4d710fb87f3b90e4dc88739fea6fe22c9b736728371860692b212610\"" Nov 8 00:46:46.142724 kubelet[2162]: E1108 00:46:46.142703 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:46.147000 containerd[1471]: time="2025-11-08T00:46:46.146847302Z" level=info msg="CreateContainer within sandbox \"b13f15be4d710fb87f3b90e4dc88739fea6fe22c9b736728371860692b212610\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:46:46.151591 containerd[1471]: time="2025-11-08T00:46:46.151567327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-239-57-65,Uid:bfcff075dba11aa230dac714bc1f2ff7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6b6240923ab3ead020e2117b712f77a42c52c3a12045b9bdd6fd66d4acc99b3\"" Nov 8 00:46:46.153080 kubelet[2162]: E1108 00:46:46.152877 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:46.157952 containerd[1471]: time="2025-11-08T00:46:46.157667741Z" level=info msg="CreateContainer within sandbox \"d6b6240923ab3ead020e2117b712f77a42c52c3a12045b9bdd6fd66d4acc99b3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:46:46.177580 containerd[1471]: time="2025-11-08T00:46:46.177233911Z" level=info msg="CreateContainer within sandbox \"b13f15be4d710fb87f3b90e4dc88739fea6fe22c9b736728371860692b212610\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"79b6bf4efabac7d218df1a9278e72566d44154c9b8844b435412706343a02542\"" Nov 8 00:46:46.180874 containerd[1471]: time="2025-11-08T00:46:46.180833068Z" level=info msg="StartContainer for \"79b6bf4efabac7d218df1a9278e72566d44154c9b8844b435412706343a02542\"" Nov 8 00:46:46.190548 containerd[1471]: time="2025-11-08T00:46:46.190505498Z" level=info msg="CreateContainer within sandbox \"d6b6240923ab3ead020e2117b712f77a42c52c3a12045b9bdd6fd66d4acc99b3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"258fedb988251a1ea6bc7699486d38cebb70d8ff40e799c4e65b553adba946d3\"" Nov 8 00:46:46.191328 systemd[1]: Started cri-containerd-e35f51f5977b72a8b6606215abba6d340a5cc6577f2b81083a5cfabff5af0de6.scope - libcontainer container e35f51f5977b72a8b6606215abba6d340a5cc6577f2b81083a5cfabff5af0de6. 
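
The sandbox and container entries here trace the CRI call sequence the kubelet drives for each static pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer launches it (the cri-containerd-<id>.scope units are the systemd side of the same events). A minimal sketch of talking to that CRI endpoint over gRPC, shown with a Version call rather than the full sandbox sequence (socket path as on this host):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Dial containerd's CRI socket; the kubelet uses the same endpoint.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        client := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // The same service also exposes RunPodSandbox, CreateContainer and
        // StartContainer, matching the log entries above.
        v, err := client.Version(ctx, &runtimeapi.VersionRequest{})
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("runtime %s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)
    }
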
Nov 8 00:46:46.192014 containerd[1471]: time="2025-11-08T00:46:46.191346987Z" level=info msg="StartContainer for \"258fedb988251a1ea6bc7699486d38cebb70d8ff40e799c4e65b553adba946d3\"" Nov 8 00:46:46.282295 systemd[1]: Started cri-containerd-79b6bf4efabac7d218df1a9278e72566d44154c9b8844b435412706343a02542.scope - libcontainer container 79b6bf4efabac7d218df1a9278e72566d44154c9b8844b435412706343a02542. Nov 8 00:46:46.317832 containerd[1471]: time="2025-11-08T00:46:46.317790421Z" level=info msg="StartContainer for \"e35f51f5977b72a8b6606215abba6d340a5cc6577f2b81083a5cfabff5af0de6\" returns successfully" Nov 8 00:46:46.360454 systemd[1]: Started cri-containerd-258fedb988251a1ea6bc7699486d38cebb70d8ff40e799c4e65b553adba946d3.scope - libcontainer container 258fedb988251a1ea6bc7699486d38cebb70d8ff40e799c4e65b553adba946d3. Nov 8 00:46:46.547465 containerd[1471]: time="2025-11-08T00:46:46.546659612Z" level=info msg="StartContainer for \"79b6bf4efabac7d218df1a9278e72566d44154c9b8844b435412706343a02542\" returns successfully" Nov 8 00:46:46.551515 containerd[1471]: time="2025-11-08T00:46:46.551266327Z" level=info msg="StartContainer for \"258fedb988251a1ea6bc7699486d38cebb70d8ff40e799c4e65b553adba946d3\" returns successfully" Nov 8 00:46:47.020169 kubelet[2162]: E1108 00:46:47.015913 2162 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-65\" not found" node="172-239-57-65" Nov 8 00:46:47.026191 kubelet[2162]: E1108 00:46:47.026056 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:47.026851 kubelet[2162]: E1108 00:46:47.026658 2162 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-65\" not found" node="172-239-57-65" Nov 8 00:46:47.026851 kubelet[2162]: E1108 00:46:47.026764 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:47.029320 kubelet[2162]: E1108 00:46:47.029304 2162 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-65\" not found" node="172-239-57-65" Nov 8 00:46:47.029560 kubelet[2162]: E1108 00:46:47.029497 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:47.164211 kubelet[2162]: I1108 00:46:47.152303 2162 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-65" Nov 8 00:46:48.097288 kubelet[2162]: E1108 00:46:48.081497 2162 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-65\" not found" node="172-239-57-65" Nov 8 00:46:48.097288 kubelet[2162]: E1108 00:46:48.081783 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:48.097288 kubelet[2162]: E1108 00:46:48.081976 2162 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-65\" not found" node="172-239-57-65" Nov 8 00:46:48.097288 kubelet[2162]: E1108 00:46:48.082098 2162 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:49.204489 kubelet[2162]: E1108 00:46:49.203796 2162 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-239-57-65\" not found" node="172-239-57-65" Nov 8 00:46:49.204489 kubelet[2162]: E1108 00:46:49.204176 2162 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:49.618778 update_engine[1443]: I20251108 00:46:49.618500 1443 update_attempter.cc:509] Updating boot flags... Nov 8 00:46:49.931610 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2455) Nov 8 00:46:50.178182 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2456) Nov 8 00:46:50.421415 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2456) Nov 8 00:46:51.068647 kubelet[2162]: E1108 00:46:51.068607 2162 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-239-57-65\" not found" node="172-239-57-65" Nov 8 00:46:51.125314 kubelet[2162]: I1108 00:46:51.125279 2162 kubelet_node_status.go:78] "Successfully registered node" node="172-239-57-65" Nov 8 00:46:51.125582 kubelet[2162]: E1108 00:46:51.125501 2162 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"172-239-57-65\": node \"172-239-57-65\" not found" Nov 8 00:46:51.162294 kubelet[2162]: I1108 00:46:51.161040 2162 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-57-65" Nov 8 00:46:51.189945 kubelet[2162]: E1108 00:46:51.189913 2162 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-57-65\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-239-57-65" Nov 8 00:46:51.190077 kubelet[2162]: I1108 00:46:51.190066 2162 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:51.197868 kubelet[2162]: I1108 00:46:51.197705 2162 apiserver.go:52] "Watching apiserver" Nov 8 00:46:51.238745 kubelet[2162]: E1108 00:46:51.238718 2162 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-57-65\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:51.239222 kubelet[2162]: I1108 00:46:51.239206 2162 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:51.243264 kubelet[2162]: E1108 00:46:51.243210 2162 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-57-65\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:51.352794 kubelet[2162]: I1108 00:46:51.352709 2162 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:46:52.905874 systemd[1]: Reloading requested from client PID 2468 ('systemctl') (unit session-7.scope)... Nov 8 00:46:52.905931 systemd[1]: Reloading... 
Nov 8 00:46:53.194195 zram_generator::config[2516]: No configuration found. Nov 8 00:46:53.275400 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:46:53.368578 systemd[1]: Reloading finished in 461 ms. Nov 8 00:46:53.442500 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:46:53.461924 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:46:53.462414 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:46:53.462523 systemd[1]: kubelet.service: Consumed 1.190s CPU time, 125.5M memory peak, 0B memory swap peak. Nov 8 00:46:53.479793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:46:53.832669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:46:53.846931 (kubelet)[2558]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:46:53.927176 kubelet[2558]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:46:53.927176 kubelet[2558]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:46:53.927176 kubelet[2558]: I1108 00:46:53.926554 2558 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:46:53.940654 kubelet[2558]: I1108 00:46:53.940596 2558 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:46:53.940654 kubelet[2558]: I1108 00:46:53.940620 2558 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:46:53.940760 kubelet[2558]: I1108 00:46:53.940676 2558 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:46:53.940760 kubelet[2558]: I1108 00:46:53.940694 2558 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:46:53.941625 kubelet[2558]: I1108 00:46:53.941085 2558 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:46:53.943167 kubelet[2558]: I1108 00:46:53.942634 2558 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:46:53.945652 kubelet[2558]: I1108 00:46:53.945478 2558 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:46:53.951463 kubelet[2558]: E1108 00:46:53.951417 2558 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:46:53.951521 kubelet[2558]: I1108 00:46:53.951467 2558 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:46:53.957289 kubelet[2558]: I1108 00:46:53.957257 2558 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:46:53.958266 kubelet[2558]: I1108 00:46:53.957634 2558 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:46:53.958266 kubelet[2558]: I1108 00:46:53.957667 2558 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-239-57-65","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:46:53.958266 kubelet[2558]: I1108 00:46:53.957853 2558 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:46:53.958266 kubelet[2558]: I1108 00:46:53.957863 2558 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:46:53.958624 kubelet[2558]: I1108 00:46:53.957907 2558 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:46:53.959105 kubelet[2558]: I1108 00:46:53.959056 2558 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:46:53.959784 kubelet[2558]: I1108 00:46:53.959333 2558 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:46:53.959784 kubelet[2558]: I1108 00:46:53.959356 2558 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:46:53.961359 kubelet[2558]: I1108 00:46:53.961287 2558 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:46:53.961359 kubelet[2558]: I1108 00:46:53.961328 2558 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:46:53.962846 kubelet[2558]: I1108 00:46:53.962536 2558 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:46:53.964882 kubelet[2558]: I1108 00:46:53.964483 2558 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:46:53.964882 kubelet[2558]: I1108 00:46:53.964514 2558 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:46:53.986196 kubelet[2558]: I1108 
00:46:53.976160 2558 server.go:1262] "Started kubelet" Nov 8 00:46:53.986196 kubelet[2558]: I1108 00:46:53.984317 2558 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:46:53.986196 kubelet[2558]: I1108 00:46:53.985446 2558 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:46:53.998167 kubelet[2558]: I1108 00:46:53.997049 2558 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:46:53.998167 kubelet[2558]: I1108 00:46:53.997168 2558 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:46:53.998167 kubelet[2558]: I1108 00:46:53.997632 2558 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:46:54.001155 kubelet[2558]: I1108 00:46:53.999922 2558 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:46:54.041920 kubelet[2558]: E1108 00:46:54.041888 2558 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:46:54.044674 kubelet[2558]: I1108 00:46:54.044328 2558 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:46:54.050489 kubelet[2558]: I1108 00:46:54.050471 2558 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:46:54.051286 kubelet[2558]: I1108 00:46:54.051270 2558 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:46:54.051634 kubelet[2558]: I1108 00:46:54.051619 2558 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:46:54.053183 kubelet[2558]: I1108 00:46:54.053164 2558 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:46:54.053358 kubelet[2558]: I1108 00:46:54.053338 2558 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:46:54.058359 kubelet[2558]: I1108 00:46:54.057911 2558 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:46:54.068974 kubelet[2558]: I1108 00:46:54.068906 2558 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:46:54.071022 kubelet[2558]: I1108 00:46:54.070984 2558 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:46:54.071093 kubelet[2558]: I1108 00:46:54.071029 2558 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:46:54.071093 kubelet[2558]: I1108 00:46:54.071074 2558 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:46:54.071643 kubelet[2558]: E1108 00:46:54.071573 2558 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:46:54.161936 kubelet[2558]: I1108 00:46:54.161009 2558 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:46:54.161936 kubelet[2558]: I1108 00:46:54.161028 2558 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:46:54.161936 kubelet[2558]: I1108 00:46:54.161055 2558 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:46:54.164484 kubelet[2558]: I1108 00:46:54.163613 2558 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:46:54.164484 kubelet[2558]: I1108 00:46:54.163639 2558 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:46:54.164484 kubelet[2558]: I1108 00:46:54.163672 2558 policy_none.go:49] "None policy: Start" Nov 8 00:46:54.164484 kubelet[2558]: I1108 00:46:54.163710 2558 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:46:54.164484 kubelet[2558]: I1108 00:46:54.163734 2558 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:46:54.164484 kubelet[2558]: I1108 00:46:54.163865 2558 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 8 00:46:54.164484 kubelet[2558]: I1108 00:46:54.163894 2558 policy_none.go:47] "Start" Nov 8 00:46:54.172687 kubelet[2558]: E1108 00:46:54.172222 2558 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:46:54.172979 kubelet[2558]: E1108 00:46:54.172949 2558 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:46:54.173244 kubelet[2558]: I1108 00:46:54.173220 2558 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:46:54.173297 kubelet[2558]: I1108 00:46:54.173251 2558 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:46:54.174548 kubelet[2558]: I1108 00:46:54.173881 2558 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:46:54.179567 kubelet[2558]: E1108 00:46:54.179512 2558 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:46:54.279746 kubelet[2558]: I1108 00:46:54.279711 2558 kubelet_node_status.go:75] "Attempting to register node" node="172-239-57-65" Nov 8 00:46:54.290564 kubelet[2558]: I1108 00:46:54.290543 2558 kubelet_node_status.go:124] "Node was previously registered" node="172-239-57-65" Nov 8 00:46:54.291308 kubelet[2558]: I1108 00:46:54.290833 2558 kubelet_node_status.go:78] "Successfully registered node" node="172-239-57-65" Nov 8 00:46:54.374370 kubelet[2558]: I1108 00:46:54.374170 2558 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:54.374370 kubelet[2558]: I1108 00:46:54.374354 2558 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:54.376294 kubelet[2558]: I1108 00:46:54.374718 2558 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-57-65" Nov 8 00:46:54.453362 kubelet[2558]: I1108 00:46:54.452719 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/27f6e18475da51437218c0ceec5276cb-usr-share-ca-certificates\") pod \"kube-controller-manager-172-239-57-65\" (UID: \"27f6e18475da51437218c0ceec5276cb\") " pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:54.453362 kubelet[2558]: I1108 00:46:54.452978 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfcff075dba11aa230dac714bc1f2ff7-k8s-certs\") pod \"kube-apiserver-172-239-57-65\" (UID: \"bfcff075dba11aa230dac714bc1f2ff7\") " pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:54.453362 kubelet[2558]: I1108 00:46:54.452999 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/27f6e18475da51437218c0ceec5276cb-flexvolume-dir\") pod \"kube-controller-manager-172-239-57-65\" (UID: \"27f6e18475da51437218c0ceec5276cb\") " pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:54.453362 kubelet[2558]: I1108 00:46:54.453018 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ed591b0a60ab1b03f3a98b0ec76fd6bf-kubeconfig\") pod \"kube-scheduler-172-239-57-65\" (UID: \"ed591b0a60ab1b03f3a98b0ec76fd6bf\") " pod="kube-system/kube-scheduler-172-239-57-65" Nov 8 00:46:54.453362 kubelet[2558]: I1108 00:46:54.453037 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfcff075dba11aa230dac714bc1f2ff7-ca-certs\") pod \"kube-apiserver-172-239-57-65\" (UID: \"bfcff075dba11aa230dac714bc1f2ff7\") " pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:54.453632 kubelet[2558]: I1108 00:46:54.453054 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfcff075dba11aa230dac714bc1f2ff7-usr-share-ca-certificates\") pod \"kube-apiserver-172-239-57-65\" (UID: \"bfcff075dba11aa230dac714bc1f2ff7\") " pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:54.453632 kubelet[2558]: I1108 00:46:54.453068 2558 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/27f6e18475da51437218c0ceec5276cb-ca-certs\") pod \"kube-controller-manager-172-239-57-65\" (UID: \"27f6e18475da51437218c0ceec5276cb\") " pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:54.453632 kubelet[2558]: I1108 00:46:54.453081 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/27f6e18475da51437218c0ceec5276cb-k8s-certs\") pod \"kube-controller-manager-172-239-57-65\" (UID: \"27f6e18475da51437218c0ceec5276cb\") " pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:54.453632 kubelet[2558]: I1108 00:46:54.453098 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27f6e18475da51437218c0ceec5276cb-kubeconfig\") pod \"kube-controller-manager-172-239-57-65\" (UID: \"27f6e18475da51437218c0ceec5276cb\") " pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:54.686345 kubelet[2558]: E1108 00:46:54.684053 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:54.686345 kubelet[2558]: E1108 00:46:54.684313 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:54.719324 kubelet[2558]: E1108 00:46:54.718692 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:54.963925 kubelet[2558]: I1108 00:46:54.963886 2558 apiserver.go:52] "Watching apiserver" Nov 8 00:46:55.052366 kubelet[2558]: I1108 00:46:55.052242 2558 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:46:55.114218 kubelet[2558]: I1108 00:46:55.113503 2558 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-239-57-65" Nov 8 00:46:55.115023 kubelet[2558]: I1108 00:46:55.114529 2558 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:55.115023 kubelet[2558]: I1108 00:46:55.114704 2558 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:55.122346 kubelet[2558]: E1108 00:46:55.122279 2558 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-239-57-65\" already exists" pod="kube-system/kube-scheduler-172-239-57-65" Nov 8 00:46:55.123813 kubelet[2558]: E1108 00:46:55.123061 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:55.124884 kubelet[2558]: E1108 00:46:55.124859 2558 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-239-57-65\" already exists" pod="kube-system/kube-controller-manager-172-239-57-65" Nov 8 00:46:55.125267 kubelet[2558]: E1108 00:46:55.125249 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:55.125631 kubelet[2558]: E1108 00:46:55.125131 2558 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-239-57-65\" already exists" pod="kube-system/kube-apiserver-172-239-57-65" Nov 8 00:46:55.125984 kubelet[2558]: E1108 00:46:55.125883 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:55.162795 kubelet[2558]: I1108 00:46:55.162479 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-239-57-65" podStartSLOduration=1.162448591 podStartE2EDuration="1.162448591s" podCreationTimestamp="2025-11-08 00:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:46:55.147885893 +0000 UTC m=+1.290441493" watchObservedRunningTime="2025-11-08 00:46:55.162448591 +0000 UTC m=+1.305004191" Nov 8 00:46:55.193559 kubelet[2558]: I1108 00:46:55.193287 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-239-57-65" podStartSLOduration=1.193273348 podStartE2EDuration="1.193273348s" podCreationTimestamp="2025-11-08 00:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:46:55.190873601 +0000 UTC m=+1.333429201" watchObservedRunningTime="2025-11-08 00:46:55.193273348 +0000 UTC m=+1.335828948" Nov 8 00:46:55.194350 kubelet[2558]: I1108 00:46:55.193451 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-239-57-65" podStartSLOduration=1.193447134 podStartE2EDuration="1.193447134s" podCreationTimestamp="2025-11-08 00:46:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:46:55.162714161 +0000 UTC m=+1.305269761" watchObservedRunningTime="2025-11-08 00:46:55.193447134 +0000 UTC m=+1.336002734" Nov 8 00:46:56.115855 kubelet[2558]: E1108 00:46:56.115785 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:56.116761 kubelet[2558]: E1108 00:46:56.116734 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:56.117503 kubelet[2558]: E1108 00:46:56.117460 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:57.117832 kubelet[2558]: E1108 00:46:57.117791 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:58.737377 kubelet[2558]: E1108 00:46:58.737131 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:46:59.122127 kubelet[2558]: E1108 00:46:59.121722 2558 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:00.238531 kubelet[2558]: I1108 00:47:00.238471 2558 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:47:00.240417 containerd[1471]: time="2025-11-08T00:47:00.239570758Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:47:00.241862 kubelet[2558]: I1108 00:47:00.241306 2558 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:47:01.137542 systemd[1]: Created slice kubepods-besteffort-podadb7bc91_4bbf_48f8_9c33_71efd8a1f16d.slice - libcontainer container kubepods-besteffort-podadb7bc91_4bbf_48f8_9c33_71efd8a1f16d.slice. Nov 8 00:47:01.213180 kubelet[2558]: I1108 00:47:01.211736 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/adb7bc91-4bbf-48f8-9c33-71efd8a1f16d-kube-proxy\") pod \"kube-proxy-czcz9\" (UID: \"adb7bc91-4bbf-48f8-9c33-71efd8a1f16d\") " pod="kube-system/kube-proxy-czcz9" Nov 8 00:47:01.213180 kubelet[2558]: I1108 00:47:01.211788 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adb7bc91-4bbf-48f8-9c33-71efd8a1f16d-xtables-lock\") pod \"kube-proxy-czcz9\" (UID: \"adb7bc91-4bbf-48f8-9c33-71efd8a1f16d\") " pod="kube-system/kube-proxy-czcz9" Nov 8 00:47:01.213180 kubelet[2558]: I1108 00:47:01.211815 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adb7bc91-4bbf-48f8-9c33-71efd8a1f16d-lib-modules\") pod \"kube-proxy-czcz9\" (UID: \"adb7bc91-4bbf-48f8-9c33-71efd8a1f16d\") " pod="kube-system/kube-proxy-czcz9" Nov 8 00:47:01.213180 kubelet[2558]: I1108 00:47:01.211830 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp7ht\" (UniqueName: \"kubernetes.io/projected/adb7bc91-4bbf-48f8-9c33-71efd8a1f16d-kube-api-access-kp7ht\") pod \"kube-proxy-czcz9\" (UID: \"adb7bc91-4bbf-48f8-9c33-71efd8a1f16d\") " pod="kube-system/kube-proxy-czcz9" Nov 8 00:47:01.377959 systemd[1]: Created slice kubepods-besteffort-podd377a4cc_3e05_41c2_b467_ce71d4c85ea4.slice - libcontainer container kubepods-besteffort-podd377a4cc_3e05_41c2_b467_ce71d4c85ea4.slice. 
Nov 8 00:47:01.455633 kubelet[2558]: E1108 00:47:01.455352 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:01.462606 containerd[1471]: time="2025-11-08T00:47:01.462452149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czcz9,Uid:adb7bc91-4bbf-48f8-9c33-71efd8a1f16d,Namespace:kube-system,Attempt:0,}" Nov 8 00:47:01.562045 kubelet[2558]: I1108 00:47:01.560523 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pxfx\" (UniqueName: \"kubernetes.io/projected/d377a4cc-3e05-41c2-b467-ce71d4c85ea4-kube-api-access-5pxfx\") pod \"tigera-operator-65cdcdfd6d-v4cr8\" (UID: \"d377a4cc-3e05-41c2-b467-ce71d4c85ea4\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-v4cr8" Nov 8 00:47:01.562045 kubelet[2558]: I1108 00:47:01.560605 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d377a4cc-3e05-41c2-b467-ce71d4c85ea4-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-v4cr8\" (UID: \"d377a4cc-3e05-41c2-b467-ce71d4c85ea4\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-v4cr8" Nov 8 00:47:01.585926 containerd[1471]: time="2025-11-08T00:47:01.585762319Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:01.585926 containerd[1471]: time="2025-11-08T00:47:01.585913314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:01.586205 containerd[1471]: time="2025-11-08T00:47:01.585945647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:01.586388 containerd[1471]: time="2025-11-08T00:47:01.586324086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:01.699376 systemd[1]: Started cri-containerd-78af900cb4fd966637196e7c17083f103ba198eee1b4c34045ff2393be918290.scope - libcontainer container 78af900cb4fd966637196e7c17083f103ba198eee1b4c34045ff2393be918290. 
Nov 8 00:47:01.753749 containerd[1471]: time="2025-11-08T00:47:01.753181643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-czcz9,Uid:adb7bc91-4bbf-48f8-9c33-71efd8a1f16d,Namespace:kube-system,Attempt:0,} returns sandbox id \"78af900cb4fd966637196e7c17083f103ba198eee1b4c34045ff2393be918290\"" Nov 8 00:47:01.754723 kubelet[2558]: E1108 00:47:01.754643 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:01.761257 containerd[1471]: time="2025-11-08T00:47:01.761218212Z" level=info msg="CreateContainer within sandbox \"78af900cb4fd966637196e7c17083f103ba198eee1b4c34045ff2393be918290\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:47:01.779193 containerd[1471]: time="2025-11-08T00:47:01.779026836Z" level=info msg="CreateContainer within sandbox \"78af900cb4fd966637196e7c17083f103ba198eee1b4c34045ff2393be918290\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3857b633ed9400fed2dad75d2832ff02df1fd51e2d9adc6d5c0037effb49ce7f\"" Nov 8 00:47:01.780187 containerd[1471]: time="2025-11-08T00:47:01.780093813Z" level=info msg="StartContainer for \"3857b633ed9400fed2dad75d2832ff02df1fd51e2d9adc6d5c0037effb49ce7f\"" Nov 8 00:47:01.855341 systemd[1]: Started cri-containerd-3857b633ed9400fed2dad75d2832ff02df1fd51e2d9adc6d5c0037effb49ce7f.scope - libcontainer container 3857b633ed9400fed2dad75d2832ff02df1fd51e2d9adc6d5c0037effb49ce7f. Nov 8 00:47:01.922686 containerd[1471]: time="2025-11-08T00:47:01.922505628Z" level=info msg="StartContainer for \"3857b633ed9400fed2dad75d2832ff02df1fd51e2d9adc6d5c0037effb49ce7f\" returns successfully" Nov 8 00:47:01.989638 containerd[1471]: time="2025-11-08T00:47:01.989041600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-v4cr8,Uid:d377a4cc-3e05-41c2-b467-ce71d4c85ea4,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:47:02.167423 kubelet[2558]: E1108 00:47:02.166836 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:02.192292 kubelet[2558]: I1108 00:47:02.183637 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-czcz9" podStartSLOduration=1.183571615 podStartE2EDuration="1.183571615s" podCreationTimestamp="2025-11-08 00:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:47:02.183043995 +0000 UTC m=+8.325599595" watchObservedRunningTime="2025-11-08 00:47:02.183571615 +0000 UTC m=+8.326127215" Nov 8 00:47:02.197443 containerd[1471]: time="2025-11-08T00:47:02.197337434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:02.197726 containerd[1471]: time="2025-11-08T00:47:02.197690546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:02.198645 containerd[1471]: time="2025-11-08T00:47:02.198600243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:02.198869 containerd[1471]: time="2025-11-08T00:47:02.198832344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:02.258617 kubelet[2558]: E1108 00:47:02.258573 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:02.289309 systemd[1]: Started cri-containerd-64bb37a498d5a07ab21ea8b27245f8e30e62a0bec8d6554724762809f778a066.scope - libcontainer container 64bb37a498d5a07ab21ea8b27245f8e30e62a0bec8d6554724762809f778a066. Nov 8 00:47:02.336078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount27723989.mount: Deactivated successfully. Nov 8 00:47:02.353100 containerd[1471]: time="2025-11-08T00:47:02.353024335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-v4cr8,Uid:d377a4cc-3e05-41c2-b467-ce71d4c85ea4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"64bb37a498d5a07ab21ea8b27245f8e30e62a0bec8d6554724762809f778a066\"" Nov 8 00:47:02.357299 containerd[1471]: time="2025-11-08T00:47:02.357260665Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:47:03.171171 kubelet[2558]: E1108 00:47:03.170882 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:03.361319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount7152082.mount: Deactivated successfully. Nov 8 00:47:05.390989 containerd[1471]: time="2025-11-08T00:47:05.390941991Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:05.391916 containerd[1471]: time="2025-11-08T00:47:05.391869224Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:47:05.392802 containerd[1471]: time="2025-11-08T00:47:05.392433427Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:05.394825 containerd[1471]: time="2025-11-08T00:47:05.394799971Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:05.396418 containerd[1471]: time="2025-11-08T00:47:05.396382633Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.039075403s" Nov 8 00:47:05.396468 containerd[1471]: time="2025-11-08T00:47:05.396419806Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:47:05.400419 containerd[1471]: time="2025-11-08T00:47:05.400370883Z" level=info msg="CreateContainer within sandbox \"64bb37a498d5a07ab21ea8b27245f8e30e62a0bec8d6554724762809f778a066\" for container 
&ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:47:05.419093 containerd[1471]: time="2025-11-08T00:47:05.419050282Z" level=info msg="CreateContainer within sandbox \"64bb37a498d5a07ab21ea8b27245f8e30e62a0bec8d6554724762809f778a066\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7fdeaa43d83f7973e17d1c076be3a42decd65b52d9a8134fd9c1e79f22b86ca7\"" Nov 8 00:47:05.419934 containerd[1471]: time="2025-11-08T00:47:05.419768748Z" level=info msg="StartContainer for \"7fdeaa43d83f7973e17d1c076be3a42decd65b52d9a8134fd9c1e79f22b86ca7\"" Nov 8 00:47:05.476284 systemd[1]: Started cri-containerd-7fdeaa43d83f7973e17d1c076be3a42decd65b52d9a8134fd9c1e79f22b86ca7.scope - libcontainer container 7fdeaa43d83f7973e17d1c076be3a42decd65b52d9a8134fd9c1e79f22b86ca7. Nov 8 00:47:05.522175 containerd[1471]: time="2025-11-08T00:47:05.520575989Z" level=info msg="StartContainer for \"7fdeaa43d83f7973e17d1c076be3a42decd65b52d9a8134fd9c1e79f22b86ca7\" returns successfully" Nov 8 00:47:05.998928 kubelet[2558]: E1108 00:47:05.998130 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:06.179577 kubelet[2558]: E1108 00:47:06.179527 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:06.191830 kubelet[2558]: I1108 00:47:06.191689 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-v4cr8" podStartSLOduration=2.14971988 podStartE2EDuration="5.191671649s" podCreationTimestamp="2025-11-08 00:47:01 +0000 UTC" firstStartedPulling="2025-11-08 00:47:02.355394629 +0000 UTC m=+8.497950239" lastFinishedPulling="2025-11-08 00:47:05.397346398 +0000 UTC m=+11.539902008" observedRunningTime="2025-11-08 00:47:06.191198534 +0000 UTC m=+12.333754134" watchObservedRunningTime="2025-11-08 00:47:06.191671649 +0000 UTC m=+12.334227249" Nov 8 00:47:14.068751 sudo[1681]: pam_unix(sudo:session): session closed for user root Nov 8 00:47:14.132130 sshd[1678]: pam_unix(sshd:session): session closed for user core Nov 8 00:47:14.141110 systemd[1]: sshd@6-172.239.57.65:22-147.75.109.163:43048.service: Deactivated successfully. Nov 8 00:47:14.145784 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:47:14.147091 systemd[1]: session-7.scope: Consumed 10.229s CPU time, 162.1M memory peak, 0B memory swap peak. Nov 8 00:47:14.150109 systemd-logind[1442]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:47:14.154400 systemd-logind[1442]: Removed session 7. Nov 8 00:47:20.543437 systemd[1]: Created slice kubepods-besteffort-pod0bf4afdb_25ff_4095_8c12_90085c98f885.slice - libcontainer container kubepods-besteffort-pod0bf4afdb_25ff_4095_8c12_90085c98f885.slice. 
Nov 8 00:47:20.588686 kubelet[2558]: I1108 00:47:20.588573 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0bf4afdb-25ff-4095-8c12-90085c98f885-typha-certs\") pod \"calico-typha-584cc6dfd5-9mln9\" (UID: \"0bf4afdb-25ff-4095-8c12-90085c98f885\") " pod="calico-system/calico-typha-584cc6dfd5-9mln9" Nov 8 00:47:20.589835 kubelet[2558]: I1108 00:47:20.588680 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0bf4afdb-25ff-4095-8c12-90085c98f885-tigera-ca-bundle\") pod \"calico-typha-584cc6dfd5-9mln9\" (UID: \"0bf4afdb-25ff-4095-8c12-90085c98f885\") " pod="calico-system/calico-typha-584cc6dfd5-9mln9" Nov 8 00:47:20.589835 kubelet[2558]: I1108 00:47:20.588830 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkbxc\" (UniqueName: \"kubernetes.io/projected/0bf4afdb-25ff-4095-8c12-90085c98f885-kube-api-access-jkbxc\") pod \"calico-typha-584cc6dfd5-9mln9\" (UID: \"0bf4afdb-25ff-4095-8c12-90085c98f885\") " pod="calico-system/calico-typha-584cc6dfd5-9mln9" Nov 8 00:47:20.769030 systemd[1]: Created slice kubepods-besteffort-pod5e48746b_af0f_494a_87a0_07cf5a69d342.slice - libcontainer container kubepods-besteffort-pod5e48746b_af0f_494a_87a0_07cf5a69d342.slice. Nov 8 00:47:20.792922 kubelet[2558]: I1108 00:47:20.792550 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzzm4\" (UniqueName: \"kubernetes.io/projected/5e48746b-af0f-494a-87a0-07cf5a69d342-kube-api-access-rzzm4\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.792922 kubelet[2558]: I1108 00:47:20.792604 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5e48746b-af0f-494a-87a0-07cf5a69d342-node-certs\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.792922 kubelet[2558]: I1108 00:47:20.792621 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5e48746b-af0f-494a-87a0-07cf5a69d342-policysync\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.792922 kubelet[2558]: I1108 00:47:20.792661 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5e48746b-af0f-494a-87a0-07cf5a69d342-var-run-calico\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.792922 kubelet[2558]: I1108 00:47:20.792694 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5e48746b-af0f-494a-87a0-07cf5a69d342-cni-bin-dir\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.793232 kubelet[2558]: I1108 00:47:20.792710 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/5e48746b-af0f-494a-87a0-07cf5a69d342-lib-modules\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.793232 kubelet[2558]: I1108 00:47:20.792724 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e48746b-af0f-494a-87a0-07cf5a69d342-xtables-lock\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.793232 kubelet[2558]: I1108 00:47:20.792740 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5e48746b-af0f-494a-87a0-07cf5a69d342-cni-log-dir\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.793232 kubelet[2558]: I1108 00:47:20.792754 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e48746b-af0f-494a-87a0-07cf5a69d342-tigera-ca-bundle\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.793232 kubelet[2558]: I1108 00:47:20.792773 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e48746b-af0f-494a-87a0-07cf5a69d342-var-lib-calico\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.793378 kubelet[2558]: I1108 00:47:20.792796 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5e48746b-af0f-494a-87a0-07cf5a69d342-flexvol-driver-host\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.793378 kubelet[2558]: I1108 00:47:20.792813 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5e48746b-af0f-494a-87a0-07cf5a69d342-cni-net-dir\") pod \"calico-node-rvxhz\" (UID: \"5e48746b-af0f-494a-87a0-07cf5a69d342\") " pod="calico-system/calico-node-rvxhz" Nov 8 00:47:20.854568 kubelet[2558]: E1108 00:47:20.854524 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:20.856717 containerd[1471]: time="2025-11-08T00:47:20.855892197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-584cc6dfd5-9mln9,Uid:0bf4afdb-25ff-4095-8c12-90085c98f885,Namespace:calico-system,Attempt:0,}" Nov 8 00:47:20.997998 kubelet[2558]: E1108 00:47:20.996961 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:21.000427 kubelet[2558]: E1108 00:47:21.000027 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Nov 8 00:47:21.001196 kubelet[2558]: W1108 00:47:21.001127 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.002322 kubelet[2558]: E1108 00:47:21.002285 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [the identical driver-call failure, $PATH warning, and plugin-probe error repeat eighteen more times between 00:47:21.004 and 00:47:21.033] Nov 8 00:47:21.033530 kubelet[2558]: E1108 00:47:21.033477 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.033530 kubelet[2558]: W1108 00:47:21.033486 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.033530 kubelet[2558]: E1108 00:47:21.033495 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 8 00:47:21.048693 containerd[1471]: time="2025-11-08T00:47:21.048545625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:21.048803 containerd[1471]: time="2025-11-08T00:47:21.048734009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:21.048803 containerd[1471]: time="2025-11-08T00:47:21.048772331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:21.049032 containerd[1471]: time="2025-11-08T00:47:21.048972776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:21.075830 kubelet[2558]: E1108 00:47:21.075710 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:21.113962 containerd[1471]: time="2025-11-08T00:47:21.097516656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rvxhz,Uid:5e48746b-af0f-494a-87a0-07cf5a69d342,Namespace:calico-system,Attempt:0,}" Nov 8 00:47:21.114076 kubelet[2558]: E1108 00:47:21.100217 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.114076 kubelet[2558]: W1108 00:47:21.100235 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.114076 kubelet[2558]: E1108 00:47:21.100257 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.114076 kubelet[2558]: I1108 00:47:21.100851 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1dbab252-cddb-4b1b-96da-a6419c1af573-varrun\") pod \"csi-node-driver-7zs58\" (UID: \"1dbab252-cddb-4b1b-96da-a6419c1af573\") " pod="calico-system/csi-node-driver-7zs58" Nov 8 00:47:21.114076 kubelet[2558]: I1108 00:47:21.102287 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1dbab252-cddb-4b1b-96da-a6419c1af573-kubelet-dir\") pod \"csi-node-driver-7zs58\" (UID: \"1dbab252-cddb-4b1b-96da-a6419c1af573\") " pod="calico-system/csi-node-driver-7zs58" Nov 8 00:47:21.114559 kubelet[2558]: I1108 00:47:21.102668 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1dbab252-cddb-4b1b-96da-a6419c1af573-socket-dir\") pod \"csi-node-driver-7zs58\" (UID: \"1dbab252-cddb-4b1b-96da-a6419c1af573\") " pod="calico-system/csi-node-driver-7zs58" Nov 8 00:47:21.114559 kubelet[2558]: I1108 00:47:21.103634 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkbqd\" (UniqueName: \"kubernetes.io/projected/1dbab252-cddb-4b1b-96da-a6419c1af573-kube-api-access-kkbqd\") pod \"csi-node-driver-7zs58\" (UID: \"1dbab252-cddb-4b1b-96da-a6419c1af573\") " pod="calico-system/csi-node-driver-7zs58" Nov 8 00:47:21.114742 kubelet[2558]: I1108 00:47:21.104239 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1dbab252-cddb-4b1b-96da-a6419c1af573-registration-dir\") pod \"csi-node-driver-7zs58\" (UID: \"1dbab252-cddb-4b1b-96da-a6419c1af573\") " pod="calico-system/csi-node-driver-7zs58" [interleaved with these volume registrations, the same driver-call failure, $PATH warning, and plugin-probe error repeat eight more times, the last truncated, through 00:47:21.107, where the capture ends]
Error: unexpected end of JSON input" Nov 8 00:47:21.114950 kubelet[2558]: E1108 00:47:21.107933 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.114950 kubelet[2558]: W1108 00:47:21.107947 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.114950 kubelet[2558]: E1108 00:47:21.107958 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.114950 kubelet[2558]: E1108 00:47:21.108302 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.115231 kubelet[2558]: W1108 00:47:21.108312 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.115231 kubelet[2558]: E1108 00:47:21.108351 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.115231 kubelet[2558]: E1108 00:47:21.108649 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.115231 kubelet[2558]: W1108 00:47:21.108679 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.115231 kubelet[2558]: E1108 00:47:21.108690 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.115231 kubelet[2558]: E1108 00:47:21.109018 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.115231 kubelet[2558]: W1108 00:47:21.109028 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.115231 kubelet[2558]: E1108 00:47:21.109038 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.115231 kubelet[2558]: E1108 00:47:21.109322 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.115231 kubelet[2558]: W1108 00:47:21.109353 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.115430 kubelet[2558]: E1108 00:47:21.109371 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:21.115430 kubelet[2558]: E1108 00:47:21.109603 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.115430 kubelet[2558]: W1108 00:47:21.109612 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.115430 kubelet[2558]: E1108 00:47:21.109621 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.209484 kubelet[2558]: E1108 00:47:21.209357 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.209484 kubelet[2558]: W1108 00:47:21.209385 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.209484 kubelet[2558]: E1108 00:47:21.209409 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.209970 kubelet[2558]: E1108 00:47:21.209852 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.209970 kubelet[2558]: W1108 00:47:21.209867 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.209970 kubelet[2558]: E1108 00:47:21.209877 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.210560 kubelet[2558]: E1108 00:47:21.210396 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.210560 kubelet[2558]: W1108 00:47:21.210409 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.210560 kubelet[2558]: E1108 00:47:21.210418 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.210827 kubelet[2558]: E1108 00:47:21.210696 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.210827 kubelet[2558]: W1108 00:47:21.210707 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.210827 kubelet[2558]: E1108 00:47:21.210716 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:21.211365 kubelet[2558]: E1108 00:47:21.211027 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.211365 kubelet[2558]: W1108 00:47:21.211038 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.211365 kubelet[2558]: E1108 00:47:21.211048 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.211595 kubelet[2558]: E1108 00:47:21.211576 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.211595 kubelet[2558]: W1108 00:47:21.211603 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.211691 kubelet[2558]: E1108 00:47:21.211613 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.211981 kubelet[2558]: E1108 00:47:21.211869 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.211981 kubelet[2558]: W1108 00:47:21.211881 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.211981 kubelet[2558]: E1108 00:47:21.211900 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.212925 kubelet[2558]: E1108 00:47:21.212482 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.212925 kubelet[2558]: W1108 00:47:21.212493 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.212925 kubelet[2558]: E1108 00:47:21.212502 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.212925 kubelet[2558]: E1108 00:47:21.212789 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.212925 kubelet[2558]: W1108 00:47:21.212799 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.212925 kubelet[2558]: E1108 00:47:21.212807 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:21.213846 kubelet[2558]: E1108 00:47:21.213195 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.213846 kubelet[2558]: W1108 00:47:21.213204 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.213846 kubelet[2558]: E1108 00:47:21.213213 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.215417 kubelet[2558]: E1108 00:47:21.215393 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.215417 kubelet[2558]: W1108 00:47:21.215412 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.215509 kubelet[2558]: E1108 00:47:21.215423 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.218768 kubelet[2558]: E1108 00:47:21.216388 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.218768 kubelet[2558]: W1108 00:47:21.216402 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.218768 kubelet[2558]: E1108 00:47:21.216413 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.218768 kubelet[2558]: E1108 00:47:21.218735 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.218768 kubelet[2558]: W1108 00:47:21.218746 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.218768 kubelet[2558]: E1108 00:47:21.218756 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.219242 kubelet[2558]: E1108 00:47:21.218986 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.219242 kubelet[2558]: W1108 00:47:21.218998 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.219242 kubelet[2558]: E1108 00:47:21.219007 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:21.220418 kubelet[2558]: E1108 00:47:21.220329 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.220418 kubelet[2558]: W1108 00:47:21.220344 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.220418 kubelet[2558]: E1108 00:47:21.220355 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.222200 kubelet[2558]: E1108 00:47:21.221495 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.222200 kubelet[2558]: W1108 00:47:21.221509 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.222200 kubelet[2558]: E1108 00:47:21.221519 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.223273 kubelet[2558]: E1108 00:47:21.222276 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.223273 kubelet[2558]: W1108 00:47:21.222286 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.223273 kubelet[2558]: E1108 00:47:21.222296 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.223585 kubelet[2558]: E1108 00:47:21.223436 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.223585 kubelet[2558]: W1108 00:47:21.223449 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.223585 kubelet[2558]: E1108 00:47:21.223459 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.224537 kubelet[2558]: E1108 00:47:21.224318 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.224537 kubelet[2558]: W1108 00:47:21.224333 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.224537 kubelet[2558]: E1108 00:47:21.224344 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:21.226863 kubelet[2558]: E1108 00:47:21.225054 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.226863 kubelet[2558]: W1108 00:47:21.225066 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.226863 kubelet[2558]: E1108 00:47:21.225076 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.226863 kubelet[2558]: E1108 00:47:21.226191 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.226863 kubelet[2558]: W1108 00:47:21.226201 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.226863 kubelet[2558]: E1108 00:47:21.226211 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.230042 kubelet[2558]: E1108 00:47:21.227213 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.230042 kubelet[2558]: W1108 00:47:21.227230 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.230042 kubelet[2558]: E1108 00:47:21.227240 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.230042 kubelet[2558]: E1108 00:47:21.227673 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.230042 kubelet[2558]: W1108 00:47:21.227684 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.230042 kubelet[2558]: E1108 00:47:21.227694 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.230042 kubelet[2558]: E1108 00:47:21.229198 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.230042 kubelet[2558]: W1108 00:47:21.229210 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.230042 kubelet[2558]: E1108 00:47:21.229221 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:21.230042 kubelet[2558]: E1108 00:47:21.229850 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.230576 kubelet[2558]: W1108 00:47:21.229872 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.230576 kubelet[2558]: E1108 00:47:21.229884 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.234693 containerd[1471]: time="2025-11-08T00:47:21.234517623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:21.234693 containerd[1471]: time="2025-11-08T00:47:21.234618655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:21.234693 containerd[1471]: time="2025-11-08T00:47:21.234666636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:21.236858 containerd[1471]: time="2025-11-08T00:47:21.235943861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:21.237201 systemd[1]: Started cri-containerd-dd87952d4310b9530e8d3cd6063181be8f201f0774c38d2d0adf267b48882e14.scope - libcontainer container dd87952d4310b9530e8d3cd6063181be8f201f0774c38d2d0adf267b48882e14. Nov 8 00:47:21.332304 kubelet[2558]: E1108 00:47:21.324624 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:21.332304 kubelet[2558]: W1108 00:47:21.324646 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:21.332304 kubelet[2558]: E1108 00:47:21.324665 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:21.388325 systemd[1]: Started cri-containerd-6258aff89189bf9ececc5dcfda239f1da4a9988c8c53eaaaaa6c22216221a1d4.scope - libcontainer container 6258aff89189bf9ececc5dcfda239f1da4a9988c8c53eaaaaa6c22216221a1d4. 
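The FlexVolume triplet that dominates the entries above has a single root cause: kubelet's plugin prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is not present on the node, so the call returns empty output, and decoding "" as JSON yields exactly "unexpected end of JSON input". A minimal Go sketch of that failure mode (this is not kubelet's actual driver-call code, and the driverStatus struct below is illustrative only, not the upstream response type):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus is an illustrative stand-in for the shape of a
    // FlexVolume driver response; the real driver would print JSON
    // like {"status":"Success"} on stdout.
    type driverStatus struct {
        Status  string `json:"status"`
        Message string `json:"message,omitempty"`
    }

    func main() {
        // The binary kubelet probes in the log above. It does not exist
        // on this node, so the call fails and out stays empty ("").
        driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        out, err := exec.Command(driver, "init").Output()
        fmt.Printf("driver call error: %v, output: %q\n", err, string(out))

        // Unmarshalling the empty output reproduces the logged error
        // string: "unexpected end of JSON input".
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            fmt.Println("unmarshal error:", err)
        }
    }

The W driver-call.go:149 warning and the two E entries (driver-call.go:262, plugins.go:697) are thus three views of the same failed probe, which is why they always appear together in the log.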
Nov 8 00:47:21.439585 containerd[1471]: time="2025-11-08T00:47:21.439018670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-584cc6dfd5-9mln9,Uid:0bf4afdb-25ff-4095-8c12-90085c98f885,Namespace:calico-system,Attempt:0,} returns sandbox id \"dd87952d4310b9530e8d3cd6063181be8f201f0774c38d2d0adf267b48882e14\"" Nov 8 00:47:21.439984 kubelet[2558]: E1108 00:47:21.439963 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:21.441472 containerd[1471]: time="2025-11-08T00:47:21.441347094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:47:21.478253 containerd[1471]: time="2025-11-08T00:47:21.477954921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rvxhz,Uid:5e48746b-af0f-494a-87a0-07cf5a69d342,Namespace:calico-system,Attempt:0,} returns sandbox id \"6258aff89189bf9ececc5dcfda239f1da4a9988c8c53eaaaaa6c22216221a1d4\"" Nov 8 00:47:21.479343 kubelet[2558]: E1108 00:47:21.478844 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:22.093806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3142647280.mount: Deactivated successfully. Nov 8 00:47:23.090666 kubelet[2558]: E1108 00:47:23.090254 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:23.951182 containerd[1471]: time="2025-11-08T00:47:23.950405197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:23.951935 containerd[1471]: time="2025-11-08T00:47:23.951630985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:47:23.954162 containerd[1471]: time="2025-11-08T00:47:23.953038469Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:23.956280 containerd[1471]: time="2025-11-08T00:47:23.956232544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:23.956997 containerd[1471]: time="2025-11-08T00:47:23.956972091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.515107864s" Nov 8 00:47:23.957109 containerd[1471]: time="2025-11-08T00:47:23.957092324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:47:23.959596 containerd[1471]: time="2025-11-08T00:47:23.959555423Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:47:23.983742 containerd[1471]: time="2025-11-08T00:47:23.983591560Z" level=info msg="CreateContainer within sandbox \"dd87952d4310b9530e8d3cd6063181be8f201f0774c38d2d0adf267b48882e14\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:47:24.005919 containerd[1471]: time="2025-11-08T00:47:24.005857607Z" level=info msg="CreateContainer within sandbox \"dd87952d4310b9530e8d3cd6063181be8f201f0774c38d2d0adf267b48882e14\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8464c79312303e0a71f677bdb8adc2ec2ac49c385fbe8bce38e1910dfbdff8f6\"" Nov 8 00:47:24.010606 containerd[1471]: time="2025-11-08T00:47:24.009844275Z" level=info msg="StartContainer for \"8464c79312303e0a71f677bdb8adc2ec2ac49c385fbe8bce38e1910dfbdff8f6\"" Nov 8 00:47:24.140286 systemd[1]: Started cri-containerd-8464c79312303e0a71f677bdb8adc2ec2ac49c385fbe8bce38e1910dfbdff8f6.scope - libcontainer container 8464c79312303e0a71f677bdb8adc2ec2ac49c385fbe8bce38e1910dfbdff8f6. Nov 8 00:47:24.191963 containerd[1471]: time="2025-11-08T00:47:24.191923052Z" level=info msg="StartContainer for \"8464c79312303e0a71f677bdb8adc2ec2ac49c385fbe8bce38e1910dfbdff8f6\" returns successfully" Nov 8 00:47:24.363741 kubelet[2558]: E1108 00:47:24.361632 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:24.588918 kubelet[2558]: E1108 00:47:24.588076 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.588918 kubelet[2558]: W1108 00:47:24.588127 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.588918 kubelet[2558]: E1108 00:47:24.588209 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.588918 kubelet[2558]: E1108 00:47:24.588722 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.588918 kubelet[2558]: W1108 00:47:24.588731 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.588918 kubelet[2558]: E1108 00:47:24.588741 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.588918 kubelet[2558]: E1108 00:47:24.588993 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.588918 kubelet[2558]: W1108 00:47:24.589003 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.588918 kubelet[2558]: E1108 00:47:24.589012 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:24.589759 kubelet[2558]: E1108 00:47:24.589423 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.589759 kubelet[2558]: W1108 00:47:24.589433 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.589759 kubelet[2558]: E1108 00:47:24.589443 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.591896 kubelet[2558]: E1108 00:47:24.591052 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.591896 kubelet[2558]: W1108 00:47:24.591068 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.591896 kubelet[2558]: E1108 00:47:24.591080 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.591896 kubelet[2558]: E1108 00:47:24.591601 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.591896 kubelet[2558]: W1108 00:47:24.591611 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.591896 kubelet[2558]: E1108 00:47:24.591620 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.592306 kubelet[2558]: E1108 00:47:24.591935 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.592306 kubelet[2558]: W1108 00:47:24.591944 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.592306 kubelet[2558]: E1108 00:47:24.591953 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.592306 kubelet[2558]: E1108 00:47:24.592295 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.592306 kubelet[2558]: W1108 00:47:24.592306 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.592411 kubelet[2558]: E1108 00:47:24.592315 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:24.592901 kubelet[2558]: E1108 00:47:24.592626 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.592901 kubelet[2558]: W1108 00:47:24.592638 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.592901 kubelet[2558]: E1108 00:47:24.592741 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.593250 kubelet[2558]: E1108 00:47:24.593085 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.593250 kubelet[2558]: W1108 00:47:24.593096 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.593250 kubelet[2558]: E1108 00:47:24.593106 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.594376 kubelet[2558]: E1108 00:47:24.593417 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.594376 kubelet[2558]: W1108 00:47:24.593429 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.594376 kubelet[2558]: E1108 00:47:24.593438 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.594376 kubelet[2558]: E1108 00:47:24.593726 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.594376 kubelet[2558]: W1108 00:47:24.593736 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.594376 kubelet[2558]: E1108 00:47:24.593744 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.594376 kubelet[2558]: E1108 00:47:24.594238 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.594376 kubelet[2558]: W1108 00:47:24.594248 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.594376 kubelet[2558]: E1108 00:47:24.594257 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:24.594604 kubelet[2558]: E1108 00:47:24.594534 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.594604 kubelet[2558]: W1108 00:47:24.594543 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.594604 kubelet[2558]: E1108 00:47:24.594551 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.595501 kubelet[2558]: E1108 00:47:24.594824 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.595501 kubelet[2558]: W1108 00:47:24.594837 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.595501 kubelet[2558]: E1108 00:47:24.594846 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.595501 kubelet[2558]: E1108 00:47:24.595375 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.595501 kubelet[2558]: W1108 00:47:24.595385 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.595501 kubelet[2558]: E1108 00:47:24.595394 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.595834 kubelet[2558]: E1108 00:47:24.595713 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.595834 kubelet[2558]: W1108 00:47:24.595727 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.595834 kubelet[2558]: E1108 00:47:24.595736 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.597388 kubelet[2558]: E1108 00:47:24.597367 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.597388 kubelet[2558]: W1108 00:47:24.597384 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.597465 kubelet[2558]: E1108 00:47:24.597394 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:24.597685 kubelet[2558]: E1108 00:47:24.597659 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.597685 kubelet[2558]: W1108 00:47:24.597677 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.597797 kubelet[2558]: E1108 00:47:24.597689 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.597997 kubelet[2558]: E1108 00:47:24.597974 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.598060 kubelet[2558]: W1108 00:47:24.597995 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.598060 kubelet[2558]: E1108 00:47:24.598011 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.598292 kubelet[2558]: E1108 00:47:24.598263 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.598292 kubelet[2558]: W1108 00:47:24.598275 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.598292 kubelet[2558]: E1108 00:47:24.598285 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.598615 kubelet[2558]: E1108 00:47:24.598555 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.598615 kubelet[2558]: W1108 00:47:24.598570 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.598615 kubelet[2558]: E1108 00:47:24.598580 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.599913 kubelet[2558]: E1108 00:47:24.599291 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.599913 kubelet[2558]: W1108 00:47:24.599304 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.599913 kubelet[2558]: E1108 00:47:24.599314 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:24.601514 kubelet[2558]: E1108 00:47:24.601441 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.601514 kubelet[2558]: W1108 00:47:24.601459 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.601514 kubelet[2558]: E1108 00:47:24.601470 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.601885 kubelet[2558]: E1108 00:47:24.601693 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.601885 kubelet[2558]: W1108 00:47:24.601702 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.601885 kubelet[2558]: E1108 00:47:24.601711 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.601963 kubelet[2558]: E1108 00:47:24.601912 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.601963 kubelet[2558]: W1108 00:47:24.601920 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.601963 kubelet[2558]: E1108 00:47:24.601929 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.603609 kubelet[2558]: E1108 00:47:24.603568 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.603609 kubelet[2558]: W1108 00:47:24.603582 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.603609 kubelet[2558]: E1108 00:47:24.603594 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.605543 kubelet[2558]: E1108 00:47:24.605523 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.605543 kubelet[2558]: W1108 00:47:24.605544 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.605619 kubelet[2558]: E1108 00:47:24.605556 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:24.606161 kubelet[2558]: E1108 00:47:24.605789 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.606161 kubelet[2558]: W1108 00:47:24.605802 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.606161 kubelet[2558]: E1108 00:47:24.605819 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.606161 kubelet[2558]: E1108 00:47:24.606086 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.606161 kubelet[2558]: W1108 00:47:24.606096 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.606161 kubelet[2558]: E1108 00:47:24.606104 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.607408 kubelet[2558]: E1108 00:47:24.607187 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.607408 kubelet[2558]: W1108 00:47:24.607391 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.607408 kubelet[2558]: E1108 00:47:24.607401 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.607912 kubelet[2558]: E1108 00:47:24.607746 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.608547 kubelet[2558]: W1108 00:47:24.607759 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.608547 kubelet[2558]: E1108 00:47:24.608526 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:47:24.610520 kubelet[2558]: E1108 00:47:24.610494 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:24.610520 kubelet[2558]: W1108 00:47:24.610511 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:24.610520 kubelet[2558]: E1108 00:47:24.610522 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:47:25.075169 kubelet[2558]: E1108 00:47:25.071492 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:25.141188 containerd[1471]: time="2025-11-08T00:47:25.140873231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:25.142579 containerd[1471]: time="2025-11-08T00:47:25.142111137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:47:25.143118 containerd[1471]: time="2025-11-08T00:47:25.143085827Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:25.146228 containerd[1471]: time="2025-11-08T00:47:25.146122259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:25.146752 containerd[1471]: time="2025-11-08T00:47:25.146654790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.185342246s" Nov 8 00:47:25.146752 containerd[1471]: time="2025-11-08T00:47:25.146693401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:47:25.151109 containerd[1471]: time="2025-11-08T00:47:25.151063871Z" level=info msg="CreateContainer within sandbox \"6258aff89189bf9ececc5dcfda239f1da4a9988c8c53eaaaaa6c22216221a1d4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:47:25.166792 containerd[1471]: time="2025-11-08T00:47:25.166707073Z" level=info msg="CreateContainer within sandbox \"6258aff89189bf9ececc5dcfda239f1da4a9988c8c53eaaaaa6c22216221a1d4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c1075728e5b8fbf816ae1e57388b5bd82ed7b827164e9528d735934060987f91\"" Nov 8 00:47:25.168105 containerd[1471]: time="2025-11-08T00:47:25.167868877Z" level=info msg="StartContainer for \"c1075728e5b8fbf816ae1e57388b5bd82ed7b827164e9528d735934060987f91\"" Nov 8 00:47:25.292329 systemd[1]: Started cri-containerd-c1075728e5b8fbf816ae1e57388b5bd82ed7b827164e9528d735934060987f91.scope - libcontainer container c1075728e5b8fbf816ae1e57388b5bd82ed7b827164e9528d735934060987f91. 
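The containerd and systemd entries above trace the ordinary CRI lifecycle for the two calico pods: RunPodSandbox returns a sandbox id, PullImage resolves a tag to a digest, CreateContainer within the sandbox returns a container id, and StartContainer runs it inside a cri-containerd-*.scope unit. A minimal sketch of that call sequence against the CRI v1 gRPC API, assuming containerd's default socket path and eliding error handling for brevity:

    package main

    import (
        "context"
        "fmt"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumes containerd's default CRI endpoint.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtime.NewRuntimeServiceClient(conn)
        img := runtime.NewImageServiceClient(conn)
        ctx := context.Background()

        // Metadata taken from the RunPodSandbox entry in the log.
        sandboxCfg := &runtime.PodSandboxConfig{
            Metadata: &runtime.PodSandboxMetadata{
                Name:      "calico-node-rvxhz",
                Namespace: "calico-system",
                Uid:       "5e48746b-af0f-494a-87a0-07cf5a69d342",
            },
        }

        // 1. RunPodSandbox -> sandbox id ("6258aff8..." in the log).
        sb, _ := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})

        // 2. PullImage, as logged for calico/typha and pod2daemon-flexvol.
        img.PullImage(ctx, &runtime.PullImageRequest{
            Image: &runtime.ImageSpec{Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4"},
        })

        // 3. CreateContainer within the sandbox -> container id.
        c, _ := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtime.ContainerConfig{
                Metadata: &runtime.ContainerMetadata{Name: "flexvol-driver"},
                Image:    &runtime.ImageSpec{Image: "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4"},
            },
            SandboxConfig: sandboxCfg,
        })

        // 4. StartContainer, matching the "returns successfully" entries.
        rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: c.ContainerId})
        fmt.Println("started", c.ContainerId)
    }

kubelet drives exactly this sequence; the hashes in the log are the returned ids (dd87952d… and 6258aff8… are sandbox ids, 8464c793… and c1075728… are container ids).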
Nov 8 00:47:25.380048 kubelet[2558]: I1108 00:47:25.379996 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:47:25.382556 kubelet[2558]: E1108 00:47:25.381746 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:25.391454 containerd[1471]: time="2025-11-08T00:47:25.391314784Z" level=info msg="StartContainer for \"c1075728e5b8fbf816ae1e57388b5bd82ed7b827164e9528d735934060987f91\" returns successfully" Nov 8 00:47:25.446622 kubelet[2558]: E1108 00:47:25.446343 2558 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:47:25.446622 kubelet[2558]: W1108 00:47:25.446371 2558 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:47:25.446622 kubelet[2558]: E1108 00:47:25.446417 2558 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" [the preceding three kubelet records repeat 32 more times between 00:47:25.446 and 00:47:25.462, differing only in timestamps] Nov 8 00:47:25.469322 systemd[1]: cri-containerd-c1075728e5b8fbf816ae1e57388b5bd82ed7b827164e9528d735934060987f91.scope: Deactivated successfully. Nov 8 00:47:25.500116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c1075728e5b8fbf816ae1e57388b5bd82ed7b827164e9528d735934060987f91-rootfs.mount: Deactivated successfully.
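[editor's note] The dns.go:154 "Nameserver limits exceeded" records are kubelet warning that the node's resolv.conf lists more nameservers than the limit of three that kubelet enforces, so only the first three (172.232.0.9, 172.232.0.19, 172.232.0.20) are applied to pods. A rough Go sketch of that check, assuming the standard /etc/resolv.conf location:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/resolv.conf")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer f.Close()

    	var servers []string
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		fields := strings.Fields(sc.Text())
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	const maxNS = 3 // classic resolver limit; kubelet warns past this
    	if len(servers) > maxNS {
    		fmt.Printf("%d nameservers; only the first %d are applied: %v\n",
    			len(servers), maxNS, servers[:maxNS])
    	} else {
    		fmt.Printf("%d nameservers, within the limit\n", len(servers))
    	}
    }

The warning recurs throughout this section but, as the "applied nameserver line" shows, pod DNS still gets a usable three-server list.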
Nov 8 00:47:25.571058 containerd[1471]: time="2025-11-08T00:47:25.570901896Z" level=info msg="shim disconnected" id=c1075728e5b8fbf816ae1e57388b5bd82ed7b827164e9528d735934060987f91 namespace=k8s.io Nov 8 00:47:25.571058 containerd[1471]: time="2025-11-08T00:47:25.571054820Z" level=warning msg="cleaning up after shim disconnected" id=c1075728e5b8fbf816ae1e57388b5bd82ed7b827164e9528d735934060987f91 namespace=k8s.io Nov 8 00:47:25.571464 containerd[1471]: time="2025-11-08T00:47:25.571068600Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:47:26.385217 kubelet[2558]: E1108 00:47:26.385125 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:26.388041 containerd[1471]: time="2025-11-08T00:47:26.386621353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:47:26.445944 kubelet[2558]: I1108 00:47:26.426623 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-584cc6dfd5-9mln9" podStartSLOduration=3.9088021729999998 podStartE2EDuration="6.426573533s" podCreationTimestamp="2025-11-08 00:47:20 +0000 UTC" firstStartedPulling="2025-11-08 00:47:21.440934962 +0000 UTC m=+27.583490572" lastFinishedPulling="2025-11-08 00:47:23.958706322 +0000 UTC m=+30.101261932" observedRunningTime="2025-11-08 00:47:24.39043274 +0000 UTC m=+30.532988350" watchObservedRunningTime="2025-11-08 00:47:26.426573533 +0000 UTC m=+32.569129133" Nov 8 00:47:27.072068 kubelet[2558]: E1108 00:47:27.071973 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:29.073529 kubelet[2558]: E1108 00:47:29.072825 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:31.094874 kubelet[2558]: E1108 00:47:31.094681 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:31.446197 containerd[1471]: time="2025-11-08T00:47:31.444883624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:31.446197 containerd[1471]: time="2025-11-08T00:47:31.445533914Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:47:31.451229 containerd[1471]: time="2025-11-08T00:47:31.450201527Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:31.456782 containerd[1471]: time="2025-11-08T00:47:31.456701796Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:31.459058 containerd[1471]: time="2025-11-08T00:47:31.458376279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.071646075s" Nov 8 00:47:31.459134 containerd[1471]: time="2025-11-08T00:47:31.459063148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:47:31.469887 containerd[1471]: time="2025-11-08T00:47:31.468123272Z" level=info msg="CreateContainer within sandbox \"6258aff89189bf9ececc5dcfda239f1da4a9988c8c53eaaaaa6c22216221a1d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:47:31.490199 containerd[1471]: time="2025-11-08T00:47:31.490121213Z" level=info msg="CreateContainer within sandbox \"6258aff89189bf9ececc5dcfda239f1da4a9988c8c53eaaaaa6c22216221a1d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"44a4baddf876f3926016f011367dda46789aa566692646bccec54b363ad2b1f6\"" Nov 8 00:47:31.492031 containerd[1471]: time="2025-11-08T00:47:31.491390230Z" level=info msg="StartContainer for \"44a4baddf876f3926016f011367dda46789aa566692646bccec54b363ad2b1f6\"" Nov 8 00:47:31.688351 systemd[1]: Started cri-containerd-44a4baddf876f3926016f011367dda46789aa566692646bccec54b363ad2b1f6.scope - libcontainer container 44a4baddf876f3926016f011367dda46789aa566692646bccec54b363ad2b1f6. 
Nov 8 00:47:31.812163 containerd[1471]: time="2025-11-08T00:47:31.810278522Z" level=info msg="StartContainer for \"44a4baddf876f3926016f011367dda46789aa566692646bccec54b363ad2b1f6\" returns successfully" Nov 8 00:47:32.439166 kubelet[2558]: E1108 00:47:32.439104 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:33.073005 kubelet[2558]: E1108 00:47:33.072228 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:33.441771 kubelet[2558]: E1108 00:47:33.441703 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:35.073933 kubelet[2558]: E1108 00:47:35.072424 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:35.186475 containerd[1471]: time="2025-11-08T00:47:35.186368042Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Nov 8 00:47:35.190326 systemd[1]: cri-containerd-44a4baddf876f3926016f011367dda46789aa566692646bccec54b363ad2b1f6.scope: Deactivated successfully. Nov 8 00:47:35.190768 systemd[1]: cri-containerd-44a4baddf876f3926016f011367dda46789aa566692646bccec54b363ad2b1f6.scope: Consumed 3.541s CPU time. Nov 8 00:47:35.229313 kubelet[2558]: I1108 00:47:35.227967 2558 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 8 00:47:35.277880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44a4baddf876f3926016f011367dda46789aa566692646bccec54b363ad2b1f6-rootfs.mount: Deactivated successfully. Nov 8 00:47:35.322328 systemd[1]: Created slice kubepods-burstable-pod86a3264a_cc32_4906_a772_6e54de93c42f.slice - libcontainer container kubepods-burstable-pod86a3264a_cc32_4906_a772_6e54de93c42f.slice. Nov 8 00:47:35.357561 systemd[1]: Created slice kubepods-burstable-podb7389023_6931_4371_ab96_cba907ffb0fd.slice - libcontainer container kubepods-burstable-podb7389023_6931_4371_ab96_cba907ffb0fd.slice. 
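[editor's note] The "failed to reload cni configuration" record above is containerd's file watcher firing, apparently while Calico's install-cni container is still writing files under /etc/cni/net.d/: a WRITE event on calico-kubeconfig triggers a reload, 10-calico.conflist is read before it is complete, and parsing the incomplete JSON fails with the same "unexpected end of JSON input". This is transient and clears once install-cni finishes. A small Go sketch of the validation the reload effectively performs (the struct covers only the two conflist keys relevant here):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    func main() {
    	// Same file the reload error above names.
    	data, err := os.ReadFile("/etc/cni/net.d/10-calico.conflist")
    	if err != nil {
    		fmt.Println("read:", err)
    		return
    	}
    	var list struct {
    		Name    string            `json:"name"`
    		Plugins []json.RawMessage `json:"plugins"`
    	}
    	if err := json.Unmarshal(data, &list); err != nil {
    		// A zero-byte or half-written file fails here with
    		// "unexpected end of JSON input", matching the log.
    		fmt.Println("parse:", err)
    		return
    	}
    	fmt.Printf("conflist %q defines %d plugin(s)\n", list.Name, len(list.Plugins))
    }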
Nov 8 00:47:35.359863 containerd[1471]: time="2025-11-08T00:47:35.358914965Z" level=info msg="shim disconnected" id=44a4baddf876f3926016f011367dda46789aa566692646bccec54b363ad2b1f6 namespace=k8s.io Nov 8 00:47:35.359863 containerd[1471]: time="2025-11-08T00:47:35.359040107Z" level=warning msg="cleaning up after shim disconnected" id=44a4baddf876f3926016f011367dda46789aa566692646bccec54b363ad2b1f6 namespace=k8s.io Nov 8 00:47:35.359863 containerd[1471]: time="2025-11-08T00:47:35.359054947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:47:35.372367 systemd[1]: Created slice kubepods-besteffort-pod0f272084_e1d2_446c_8416_88d0ec3d2c2e.slice - libcontainer container kubepods-besteffort-pod0f272084_e1d2_446c_8416_88d0ec3d2c2e.slice. Nov 8 00:47:35.392876 systemd[1]: Created slice kubepods-besteffort-pod2e86dafc_d904_4554_bd5d_17e562479113.slice - libcontainer container kubepods-besteffort-pod2e86dafc_d904_4554_bd5d_17e562479113.slice. Nov 8 00:47:35.416825 kubelet[2558]: I1108 00:47:35.416781 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7389023-6931-4371-ab96-cba907ffb0fd-config-volume\") pod \"coredns-66bc5c9577-7vn5c\" (UID: \"b7389023-6931-4371-ab96-cba907ffb0fd\") " pod="kube-system/coredns-66bc5c9577-7vn5c" Nov 8 00:47:35.418871 systemd[1]: Created slice kubepods-besteffort-pod7494706d_b88b_42e3_9001_7633cd787a06.slice - libcontainer container kubepods-besteffort-pod7494706d_b88b_42e3_9001_7633cd787a06.slice. Nov 8 00:47:35.419351 containerd[1471]: time="2025-11-08T00:47:35.419308820Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:47:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:47:35.421750 kubelet[2558]: I1108 00:47:35.421259 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8c8w\" (UniqueName: \"kubernetes.io/projected/b7389023-6931-4371-ab96-cba907ffb0fd-kube-api-access-d8c8w\") pod \"coredns-66bc5c9577-7vn5c\" (UID: \"b7389023-6931-4371-ab96-cba907ffb0fd\") " pod="kube-system/coredns-66bc5c9577-7vn5c" Nov 8 00:47:35.421750 kubelet[2558]: I1108 00:47:35.421292 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f272084-e1d2-446c-8416-88d0ec3d2c2e-tigera-ca-bundle\") pod \"calico-kube-controllers-64fb4f5b7-cmrkg\" (UID: \"0f272084-e1d2-446c-8416-88d0ec3d2c2e\") " pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" Nov 8 00:47:35.421750 kubelet[2558]: I1108 00:47:35.421307 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvwh\" (UniqueName: \"kubernetes.io/projected/0f272084-e1d2-446c-8416-88d0ec3d2c2e-kube-api-access-bkvwh\") pod \"calico-kube-controllers-64fb4f5b7-cmrkg\" (UID: \"0f272084-e1d2-446c-8416-88d0ec3d2c2e\") " pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" Nov 8 00:47:35.421750 kubelet[2558]: I1108 00:47:35.421331 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86a3264a-cc32-4906-a772-6e54de93c42f-config-volume\") pod \"coredns-66bc5c9577-mqcsd\" (UID: \"86a3264a-cc32-4906-a772-6e54de93c42f\") " 
pod="kube-system/coredns-66bc5c9577-mqcsd" Nov 8 00:47:35.421750 kubelet[2558]: I1108 00:47:35.421353 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qshk\" (UniqueName: \"kubernetes.io/projected/86a3264a-cc32-4906-a772-6e54de93c42f-kube-api-access-7qshk\") pod \"coredns-66bc5c9577-mqcsd\" (UID: \"86a3264a-cc32-4906-a772-6e54de93c42f\") " pod="kube-system/coredns-66bc5c9577-mqcsd" Nov 8 00:47:35.422130 systemd[1]: Created slice kubepods-besteffort-podb94ad5eb_b32f_4d7b_9d8f_b926b3db8f9d.slice - libcontainer container kubepods-besteffort-podb94ad5eb_b32f_4d7b_9d8f_b926b3db8f9d.slice. Nov 8 00:47:35.434770 systemd[1]: Created slice kubepods-besteffort-pod91a10bd0_ee88_4b71_90ab_bbe7e6569a64.slice - libcontainer container kubepods-besteffort-pod91a10bd0_ee88_4b71_90ab_bbe7e6569a64.slice. Nov 8 00:47:35.456170 kubelet[2558]: E1108 00:47:35.454189 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:35.460303 containerd[1471]: time="2025-11-08T00:47:35.460269483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:47:35.521872 kubelet[2558]: I1108 00:47:35.521829 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-whisker-ca-bundle\") pod \"whisker-77cdfbb855-ghqpc\" (UID: \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\") " pod="calico-system/whisker-77cdfbb855-ghqpc" Nov 8 00:47:35.524958 kubelet[2558]: I1108 00:47:35.524089 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpdd8\" (UniqueName: \"kubernetes.io/projected/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-kube-api-access-vpdd8\") pod \"whisker-77cdfbb855-ghqpc\" (UID: \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\") " pod="calico-system/whisker-77cdfbb855-ghqpc" Nov 8 00:47:35.524958 kubelet[2558]: I1108 00:47:35.524117 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqrp5\" (UniqueName: \"kubernetes.io/projected/2e86dafc-d904-4554-bd5d-17e562479113-kube-api-access-hqrp5\") pod \"calico-apiserver-5866ffd9dc-wrnwt\" (UID: \"2e86dafc-d904-4554-bd5d-17e562479113\") " pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" Nov 8 00:47:35.524958 kubelet[2558]: I1108 00:47:35.524163 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7494706d-b88b-42e3-9001-7633cd787a06-goldmane-key-pair\") pod \"goldmane-7c778bb748-wb7lg\" (UID: \"7494706d-b88b-42e3-9001-7633cd787a06\") " pod="calico-system/goldmane-7c778bb748-wb7lg" Nov 8 00:47:35.524958 kubelet[2558]: I1108 00:47:35.524218 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g2rm\" (UniqueName: \"kubernetes.io/projected/91a10bd0-ee88-4b71-90ab-bbe7e6569a64-kube-api-access-7g2rm\") pod \"calico-apiserver-5866ffd9dc-4k97x\" (UID: \"91a10bd0-ee88-4b71-90ab-bbe7e6569a64\") " pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" Nov 8 00:47:35.524958 kubelet[2558]: I1108 00:47:35.524243 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" 
(UniqueName: \"kubernetes.io/secret/2e86dafc-d904-4554-bd5d-17e562479113-calico-apiserver-certs\") pod \"calico-apiserver-5866ffd9dc-wrnwt\" (UID: \"2e86dafc-d904-4554-bd5d-17e562479113\") " pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" Nov 8 00:47:35.525155 kubelet[2558]: I1108 00:47:35.524272 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-whisker-backend-key-pair\") pod \"whisker-77cdfbb855-ghqpc\" (UID: \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\") " pod="calico-system/whisker-77cdfbb855-ghqpc" Nov 8 00:47:35.525155 kubelet[2558]: I1108 00:47:35.524286 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7494706d-b88b-42e3-9001-7633cd787a06-config\") pod \"goldmane-7c778bb748-wb7lg\" (UID: \"7494706d-b88b-42e3-9001-7633cd787a06\") " pod="calico-system/goldmane-7c778bb748-wb7lg" Nov 8 00:47:35.525155 kubelet[2558]: I1108 00:47:35.524304 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7494706d-b88b-42e3-9001-7633cd787a06-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-wb7lg\" (UID: \"7494706d-b88b-42e3-9001-7633cd787a06\") " pod="calico-system/goldmane-7c778bb748-wb7lg" Nov 8 00:47:35.525155 kubelet[2558]: I1108 00:47:35.524318 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91a10bd0-ee88-4b71-90ab-bbe7e6569a64-calico-apiserver-certs\") pod \"calico-apiserver-5866ffd9dc-4k97x\" (UID: \"91a10bd0-ee88-4b71-90ab-bbe7e6569a64\") " pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" Nov 8 00:47:35.525155 kubelet[2558]: I1108 00:47:35.524349 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95k8l\" (UniqueName: \"kubernetes.io/projected/7494706d-b88b-42e3-9001-7633cd787a06-kube-api-access-95k8l\") pod \"goldmane-7c778bb748-wb7lg\" (UID: \"7494706d-b88b-42e3-9001-7633cd787a06\") " pod="calico-system/goldmane-7c778bb748-wb7lg" Nov 8 00:47:35.649171 kubelet[2558]: E1108 00:47:35.647347 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:35.650190 containerd[1471]: time="2025-11-08T00:47:35.649784302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mqcsd,Uid:86a3264a-cc32-4906-a772-6e54de93c42f,Namespace:kube-system,Attempt:0,}" Nov 8 00:47:35.667523 kubelet[2558]: E1108 00:47:35.667487 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:35.677196 containerd[1471]: time="2025-11-08T00:47:35.677157845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7vn5c,Uid:b7389023-6931-4371-ab96-cba907ffb0fd,Namespace:kube-system,Attempt:0,}" Nov 8 00:47:35.686871 containerd[1471]: time="2025-11-08T00:47:35.686845555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fb4f5b7-cmrkg,Uid:0f272084-e1d2-446c-8416-88d0ec3d2c2e,Namespace:calico-system,Attempt:0,}" Nov 8 
00:47:35.702691 containerd[1471]: time="2025-11-08T00:47:35.702646579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5866ffd9dc-wrnwt,Uid:2e86dafc-d904-4554-bd5d-17e562479113,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:47:35.737474 containerd[1471]: time="2025-11-08T00:47:35.737355748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77cdfbb855-ghqpc,Uid:b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d,Namespace:calico-system,Attempt:0,}" Nov 8 00:47:35.738413 containerd[1471]: time="2025-11-08T00:47:35.737783962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wb7lg,Uid:7494706d-b88b-42e3-9001-7633cd787a06,Namespace:calico-system,Attempt:0,}" Nov 8 00:47:35.741125 containerd[1471]: time="2025-11-08T00:47:35.741103966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5866ffd9dc-4k97x,Uid:91a10bd0-ee88-4b71-90ab-bbe7e6569a64,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:47:36.033784 containerd[1471]: time="2025-11-08T00:47:36.032909090Z" level=error msg="Failed to destroy network for sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.148167 containerd[1471]: time="2025-11-08T00:47:36.145804507Z" level=error msg="encountered an error cleaning up failed sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.148167 containerd[1471]: time="2025-11-08T00:47:36.145903288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mqcsd,Uid:86a3264a-cc32-4906-a772-6e54de93c42f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.149179 kubelet[2558]: E1108 00:47:36.149099 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.150102 kubelet[2558]: E1108 00:47:36.150070 2558 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mqcsd" Nov 8 00:47:36.150644 kubelet[2558]: E1108 00:47:36.150427 2558 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-mqcsd" Nov 8 00:47:36.151225 kubelet[2558]: E1108 00:47:36.151180 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-mqcsd_kube-system(86a3264a-cc32-4906-a772-6e54de93c42f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-mqcsd_kube-system(86a3264a-cc32-4906-a772-6e54de93c42f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mqcsd" podUID="86a3264a-cc32-4906-a772-6e54de93c42f" Nov 8 00:47:36.211593 containerd[1471]: time="2025-11-08T00:47:36.211511810Z" level=error msg="Failed to destroy network for sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.213749 containerd[1471]: time="2025-11-08T00:47:36.213381768Z" level=error msg="encountered an error cleaning up failed sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.213749 containerd[1471]: time="2025-11-08T00:47:36.213458889Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fb4f5b7-cmrkg,Uid:0f272084-e1d2-446c-8416-88d0ec3d2c2e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.214058 kubelet[2558]: E1108 00:47:36.213776 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.214058 kubelet[2558]: E1108 00:47:36.213859 2558 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" Nov 8 00:47:36.214058 kubelet[2558]: E1108 00:47:36.213883 2558 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" Nov 8 00:47:36.214239 kubelet[2558]: E1108 00:47:36.213966 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64fb4f5b7-cmrkg_calico-system(0f272084-e1d2-446c-8416-88d0ec3d2c2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-64fb4f5b7-cmrkg_calico-system(0f272084-e1d2-446c-8416-88d0ec3d2c2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:47:36.248955 containerd[1471]: time="2025-11-08T00:47:36.248342195Z" level=error msg="Failed to destroy network for sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.248955 containerd[1471]: time="2025-11-08T00:47:36.248769438Z" level=error msg="encountered an error cleaning up failed sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.248955 containerd[1471]: time="2025-11-08T00:47:36.248835409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wb7lg,Uid:7494706d-b88b-42e3-9001-7633cd787a06,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.249243 kubelet[2558]: E1108 00:47:36.249146 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.249243 kubelet[2558]: E1108 00:47:36.249211 2558 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-wb7lg" Nov 8 00:47:36.249346 kubelet[2558]: E1108 00:47:36.249249 2558 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-wb7lg" Nov 8 00:47:36.249346 kubelet[2558]: E1108 00:47:36.249315 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-wb7lg_calico-system(7494706d-b88b-42e3-9001-7633cd787a06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-wb7lg_calico-system(7494706d-b88b-42e3-9001-7633cd787a06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:47:36.264335 containerd[1471]: time="2025-11-08T00:47:36.264289448Z" level=error msg="Failed to destroy network for sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.265214 containerd[1471]: time="2025-11-08T00:47:36.265189477Z" level=error msg="Failed to destroy network for sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.271165 containerd[1471]: time="2025-11-08T00:47:36.270961132Z" level=error msg="encountered an error cleaning up failed sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.271165 containerd[1471]: time="2025-11-08T00:47:36.271014583Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77cdfbb855-ghqpc,Uid:b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.276180 containerd[1471]: time="2025-11-08T00:47:36.272026552Z" level=error msg="encountered an error cleaning up failed sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.276250 kubelet[2558]: E1108 00:47:36.272091 2558 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.276250 kubelet[2558]: E1108 00:47:36.272169 2558 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77cdfbb855-ghqpc" Nov 8 00:47:36.276250 kubelet[2558]: E1108 00:47:36.272196 2558 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-77cdfbb855-ghqpc" Nov 8 00:47:36.276378 kubelet[2558]: E1108 00:47:36.272273 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-77cdfbb855-ghqpc_calico-system(b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-77cdfbb855-ghqpc_calico-system(b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77cdfbb855-ghqpc" podUID="b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d" Nov 8 00:47:36.294800 containerd[1471]: time="2025-11-08T00:47:36.294668221Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5866ffd9dc-4k97x,Uid:91a10bd0-ee88-4b71-90ab-bbe7e6569a64,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.299174 kubelet[2558]: E1108 00:47:36.296449 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.299174 kubelet[2558]: E1108 00:47:36.296513 2558 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" Nov 8 00:47:36.299174 kubelet[2558]: E1108 00:47:36.296533 2558 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" Nov 8 00:47:36.299320 kubelet[2558]: E1108 00:47:36.296796 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5866ffd9dc-4k97x_calico-apiserver(91a10bd0-ee88-4b71-90ab-bbe7e6569a64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5866ffd9dc-4k97x_calico-apiserver(91a10bd0-ee88-4b71-90ab-bbe7e6569a64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:47:36.301049 containerd[1471]: time="2025-11-08T00:47:36.301022571Z" level=error msg="Failed to destroy network for sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.302440 containerd[1471]: time="2025-11-08T00:47:36.302346525Z" level=error msg="encountered an error cleaning up failed sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.302578 containerd[1471]: time="2025-11-08T00:47:36.302546497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7vn5c,Uid:b7389023-6931-4371-ab96-cba907ffb0fd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.303330 kubelet[2558]: E1108 00:47:36.302751 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.303330 kubelet[2558]: E1108 00:47:36.302790 2558 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7vn5c" Nov 8 00:47:36.303330 kubelet[2558]: E1108 00:47:36.302813 2558 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7vn5c" Nov 8 00:47:36.303461 kubelet[2558]: E1108 00:47:36.302856 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7vn5c_kube-system(b7389023-6931-4371-ab96-cba907ffb0fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7vn5c_kube-system(b7389023-6931-4371-ab96-cba907ffb0fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7vn5c" podUID="b7389023-6931-4371-ab96-cba907ffb0fd" Nov 8 00:47:36.304550 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb-shm.mount: Deactivated successfully. Nov 8 00:47:36.310095 containerd[1471]: time="2025-11-08T00:47:36.309678436Z" level=error msg="Failed to destroy network for sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.310095 containerd[1471]: time="2025-11-08T00:47:36.310025208Z" level=error msg="encountered an error cleaning up failed sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.310095 containerd[1471]: time="2025-11-08T00:47:36.310058949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5866ffd9dc-wrnwt,Uid:2e86dafc-d904-4554-bd5d-17e562479113,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.310503 kubelet[2558]: E1108 00:47:36.310467 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.310643 kubelet[2558]: E1108 00:47:36.310583 2558 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" Nov 8 00:47:36.310643 kubelet[2558]: E1108 00:47:36.310605 2558 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" Nov 8 00:47:36.311373 kubelet[2558]: E1108 00:47:36.310747 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5866ffd9dc-wrnwt_calico-apiserver(2e86dafc-d904-4554-bd5d-17e562479113)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5866ffd9dc-wrnwt_calico-apiserver(2e86dafc-d904-4554-bd5d-17e562479113)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:47:36.311777 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03-shm.mount: Deactivated successfully. 
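[Editorial note] Every failure above, across both the (add) and (delete) paths, reduces to one missing file: the Calico CNI plugin reads /var/lib/calico/nodename to learn which node it is acting for, and calico/node writes that file into a hostPath mount of /var/lib/calico/ only once it is running. Until then every sandbox create and destroy fails the same way. A minimal Go sketch of that gate follows; it is illustrative, not Calico's actual source, with the path and the hint text taken from the log itself.

package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // written by calico/node at startup

// determineNodename refuses to proceed until calico/node has identified
// this host, which is the check the CNI plugin keeps failing above.
func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI acting for node:", name)
}

The loop resolves itself further down in the log: once the calico-node container starts at 00:47:44, the file exists and the same sandboxes tear down and recreate cleanly.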
Nov 8 00:47:36.463159 kubelet[2558]: I1108 00:47:36.463093 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:36.466593 containerd[1471]: time="2025-11-08T00:47:36.466128391Z" level=info msg="StopPodSandbox for \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\"" Nov 8 00:47:36.466593 containerd[1471]: time="2025-11-08T00:47:36.466333554Z" level=info msg="Ensure that sandbox fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1 in task-service has been cleanup successfully" Nov 8 00:47:36.481515 kubelet[2558]: I1108 00:47:36.481492 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:36.483666 containerd[1471]: time="2025-11-08T00:47:36.483625081Z" level=info msg="StopPodSandbox for \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\"" Nov 8 00:47:36.485019 containerd[1471]: time="2025-11-08T00:47:36.484433258Z" level=info msg="Ensure that sandbox b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca in task-service has been cleanup successfully" Nov 8 00:47:36.487946 kubelet[2558]: I1108 00:47:36.487927 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:36.489797 containerd[1471]: time="2025-11-08T00:47:36.489180714Z" level=info msg="StopPodSandbox for \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\"" Nov 8 00:47:36.489797 containerd[1471]: time="2025-11-08T00:47:36.489309695Z" level=info msg="Ensure that sandbox d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059 in task-service has been cleanup successfully" Nov 8 00:47:36.498870 kubelet[2558]: I1108 00:47:36.493922 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:36.508148 containerd[1471]: time="2025-11-08T00:47:36.508033355Z" level=info msg="StopPodSandbox for \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\"" Nov 8 00:47:36.509076 containerd[1471]: time="2025-11-08T00:47:36.509054645Z" level=info msg="Ensure that sandbox 0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b in task-service has been cleanup successfully" Nov 8 00:47:36.513207 kubelet[2558]: I1108 00:47:36.513181 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:36.515420 containerd[1471]: time="2025-11-08T00:47:36.515312135Z" level=info msg="StopPodSandbox for \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\"" Nov 8 00:47:36.517033 containerd[1471]: time="2025-11-08T00:47:36.516983151Z" level=info msg="Ensure that sandbox 380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03 in task-service has been cleanup successfully" Nov 8 00:47:36.522803 kubelet[2558]: I1108 00:47:36.522782 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:36.524676 containerd[1471]: time="2025-11-08T00:47:36.524640475Z" level=info msg="StopPodSandbox for \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\"" Nov 8 00:47:36.525914 containerd[1471]: 
time="2025-11-08T00:47:36.525724475Z" level=info msg="Ensure that sandbox 150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb in task-service has been cleanup successfully" Nov 8 00:47:36.529386 kubelet[2558]: I1108 00:47:36.529367 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:36.530973 containerd[1471]: time="2025-11-08T00:47:36.530941505Z" level=info msg="StopPodSandbox for \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\"" Nov 8 00:47:36.531155 containerd[1471]: time="2025-11-08T00:47:36.531109167Z" level=info msg="Ensure that sandbox 3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86 in task-service has been cleanup successfully" Nov 8 00:47:36.650299 containerd[1471]: time="2025-11-08T00:47:36.650222384Z" level=error msg="StopPodSandbox for \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\" failed" error="failed to destroy network for sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.652212 kubelet[2558]: E1108 00:47:36.652173 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:36.652453 kubelet[2558]: E1108 00:47:36.652367 2558 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059"} Nov 8 00:47:36.652568 kubelet[2558]: E1108 00:47:36.652547 2558 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91a10bd0-ee88-4b71-90ab-bbe7e6569a64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:47:36.652752 kubelet[2558]: E1108 00:47:36.652728 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91a10bd0-ee88-4b71-90ab-bbe7e6569a64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:47:36.685510 containerd[1471]: time="2025-11-08T00:47:36.685402573Z" level=error msg="StopPodSandbox for \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\" failed" error="failed to destroy network for sandbox 
\"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.686230 kubelet[2558]: E1108 00:47:36.686116 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:36.686396 kubelet[2558]: E1108 00:47:36.686374 2558 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca"} Nov 8 00:47:36.686493 kubelet[2558]: E1108 00:47:36.686472 2558 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0f272084-e1d2-446c-8416-88d0ec3d2c2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:47:36.686662 kubelet[2558]: E1108 00:47:36.686638 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0f272084-e1d2-446c-8416-88d0ec3d2c2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:47:36.742938 containerd[1471]: time="2025-11-08T00:47:36.742888156Z" level=error msg="StopPodSandbox for \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\" failed" error="failed to destroy network for sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.743382 kubelet[2558]: E1108 00:47:36.743344 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:36.743562 kubelet[2558]: E1108 00:47:36.743524 2558 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b"} Nov 8 00:47:36.743857 kubelet[2558]: E1108 00:47:36.743839 2558 kuberuntime_manager.go:1233] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7494706d-b88b-42e3-9001-7633cd787a06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:47:36.744040 kubelet[2558]: E1108 00:47:36.743979 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7494706d-b88b-42e3-9001-7633cd787a06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:47:36.756158 containerd[1471]: time="2025-11-08T00:47:36.756107243Z" level=error msg="StopPodSandbox for \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\" failed" error="failed to destroy network for sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.759170 kubelet[2558]: E1108 00:47:36.758383 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:36.759170 kubelet[2558]: E1108 00:47:36.758665 2558 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86"} Nov 8 00:47:36.759170 kubelet[2558]: E1108 00:47:36.758853 2558 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86a3264a-cc32-4906-a772-6e54de93c42f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:47:36.759170 kubelet[2558]: E1108 00:47:36.759025 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86a3264a-cc32-4906-a772-6e54de93c42f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-mqcsd" podUID="86a3264a-cc32-4906-a772-6e54de93c42f" 
Nov 8 00:47:36.781202 containerd[1471]: time="2025-11-08T00:47:36.780941123Z" level=error msg="StopPodSandbox for \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\" failed" error="failed to destroy network for sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.781359 kubelet[2558]: E1108 00:47:36.781258 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:36.781359 kubelet[2558]: E1108 00:47:36.781311 2558 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1"} Nov 8 00:47:36.781429 kubelet[2558]: E1108 00:47:36.781381 2558 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:47:36.781801 kubelet[2558]: E1108 00:47:36.781477 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-77cdfbb855-ghqpc" podUID="b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d" Nov 8 00:47:36.789192 containerd[1471]: time="2025-11-08T00:47:36.789058911Z" level=error msg="StopPodSandbox for \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\" failed" error="failed to destroy network for sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.789556 kubelet[2558]: E1108 00:47:36.789427 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:36.789556 kubelet[2558]: E1108 00:47:36.789469 2558 kuberuntime_manager.go:1665] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03"} Nov 8 00:47:36.789556 kubelet[2558]: E1108 00:47:36.789494 2558 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e86dafc-d904-4554-bd5d-17e562479113\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:47:36.789556 kubelet[2558]: E1108 00:47:36.789519 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e86dafc-d904-4554-bd5d-17e562479113\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:47:36.810476 containerd[1471]: time="2025-11-08T00:47:36.810409457Z" level=error msg="StopPodSandbox for \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\" failed" error="failed to destroy network for sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:36.810715 kubelet[2558]: E1108 00:47:36.810663 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:36.810823 kubelet[2558]: E1108 00:47:36.810732 2558 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb"} Nov 8 00:47:36.810823 kubelet[2558]: E1108 00:47:36.810778 2558 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7389023-6931-4371-ab96-cba907ffb0fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:47:36.810823 kubelet[2558]: E1108 00:47:36.810813 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7389023-6931-4371-ab96-cba907ffb0fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7vn5c" podUID="b7389023-6931-4371-ab96-cba907ffb0fd" Nov 8 00:47:37.081118 systemd[1]: Created slice kubepods-besteffort-pod1dbab252_cddb_4b1b_96da_a6419c1af573.slice - libcontainer container kubepods-besteffort-pod1dbab252_cddb_4b1b_96da_a6419c1af573.slice. Nov 8 00:47:37.096048 containerd[1471]: time="2025-11-08T00:47:37.094976564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7zs58,Uid:1dbab252-cddb-4b1b-96da-a6419c1af573,Namespace:calico-system,Attempt:0,}" Nov 8 00:47:37.261580 containerd[1471]: time="2025-11-08T00:47:37.261495229Z" level=error msg="Failed to destroy network for sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:37.262630 containerd[1471]: time="2025-11-08T00:47:37.262060233Z" level=error msg="encountered an error cleaning up failed sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:37.262630 containerd[1471]: time="2025-11-08T00:47:37.262175865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7zs58,Uid:1dbab252-cddb-4b1b-96da-a6419c1af573,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:37.262710 kubelet[2558]: E1108 00:47:37.262459 2558 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:37.262710 kubelet[2558]: E1108 00:47:37.262577 2558 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7zs58" Nov 8 00:47:37.262710 kubelet[2558]: E1108 00:47:37.262614 2558 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7zs58" Nov 8 00:47:37.263098 kubelet[2558]: E1108 00:47:37.262713 2558 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7zs58_calico-system(1dbab252-cddb-4b1b-96da-a6419c1af573)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7zs58_calico-system(1dbab252-cddb-4b1b-96da-a6419c1af573)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:37.277458 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5-shm.mount: Deactivated successfully. Nov 8 00:47:37.534983 kubelet[2558]: I1108 00:47:37.532757 2558 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:37.535255 containerd[1471]: time="2025-11-08T00:47:37.535218890Z" level=info msg="StopPodSandbox for \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\"" Nov 8 00:47:37.535976 containerd[1471]: time="2025-11-08T00:47:37.535952677Z" level=info msg="Ensure that sandbox 92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5 in task-service has been cleanup successfully" Nov 8 00:47:37.828280 containerd[1471]: time="2025-11-08T00:47:37.828083416Z" level=error msg="StopPodSandbox for \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\" failed" error="failed to destroy network for sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:47:37.828740 kubelet[2558]: E1108 00:47:37.828502 2558 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:37.828959 kubelet[2558]: E1108 00:47:37.828774 2558 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5"} Nov 8 00:47:37.828959 kubelet[2558]: E1108 00:47:37.828912 2558 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1dbab252-cddb-4b1b-96da-a6419c1af573\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:47:37.828959 kubelet[2558]: E1108 00:47:37.828954 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1dbab252-cddb-4b1b-96da-a6419c1af573\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:43.880112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3553501375.mount: Deactivated successfully. Nov 8 00:47:43.935516 containerd[1471]: time="2025-11-08T00:47:43.934736930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:43.940044 containerd[1471]: time="2025-11-08T00:47:43.935900117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:47:43.940044 containerd[1471]: time="2025-11-08T00:47:43.936280890Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:43.940044 containerd[1471]: time="2025-11-08T00:47:43.939099886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:47:43.943153 kubelet[2558]: I1108 00:47:43.943075 2558 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:47:43.946219 containerd[1471]: time="2025-11-08T00:47:43.944915259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.481666685s" Nov 8 00:47:43.946219 containerd[1471]: time="2025-11-08T00:47:43.945036980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:47:43.950242 kubelet[2558]: E1108 00:47:43.950202 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:44.001509 containerd[1471]: time="2025-11-08T00:47:44.001040893Z" level=info msg="CreateContainer within sandbox \"6258aff89189bf9ececc5dcfda239f1da4a9988c8c53eaaaaa6c22216221a1d4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:47:44.043118 containerd[1471]: time="2025-11-08T00:47:44.043062097Z" level=info msg="CreateContainer within sandbox \"6258aff89189bf9ececc5dcfda239f1da4a9988c8c53eaaaaa6c22216221a1d4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9537688cdfdb8e530bf66ad2be85397606e1e784c48bcff7806953f9817700d0\"" Nov 8 00:47:44.043665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2340862708.mount: Deactivated successfully. 
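[Editorial note] The mount unit names systemd logs here (var-lib-containerd-tmpmounts-containerd\x2dmount….mount) follow systemd's path escaping: "/" separates components with "-", so a literal "-" inside a path component is written as \x2d. Below is a sketch of just that subset of the rule; the real converter, systemd-escape --path --suffix=mount, also escapes other special bytes.

package main

import (
	"fmt"
	"strings"
)

// escapeMountUnit maps a filesystem path to its systemd mount unit name,
// handling only the '/' and '-' cases visible in the log above.
func escapeMountUnit(path string) string {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	for i, p := range parts {
		parts[i] = strings.ReplaceAll(p, "-", `\x2d`)
	}
	return strings.Join(parts, "-") + ".mount"
}

func main() {
	fmt.Println(escapeMountUnit("/var/lib/containerd/tmpmounts/containerd-mount2340862708"))
	// var-lib-containerd-tmpmounts-containerd\x2dmount2340862708.mount
}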
Nov 8 00:47:44.045247 containerd[1471]: time="2025-11-08T00:47:44.044101943Z" level=info msg="StartContainer for \"9537688cdfdb8e530bf66ad2be85397606e1e784c48bcff7806953f9817700d0\"" Nov 8 00:47:44.126334 systemd[1]: Started cri-containerd-9537688cdfdb8e530bf66ad2be85397606e1e784c48bcff7806953f9817700d0.scope - libcontainer container 9537688cdfdb8e530bf66ad2be85397606e1e784c48bcff7806953f9817700d0. Nov 8 00:47:44.207306 containerd[1471]: time="2025-11-08T00:47:44.205429675Z" level=info msg="StartContainer for \"9537688cdfdb8e530bf66ad2be85397606e1e784c48bcff7806953f9817700d0\" returns successfully" Nov 8 00:47:44.450854 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:47:44.451076 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. Nov 8 00:47:44.606302 containerd[1471]: time="2025-11-08T00:47:44.606100955Z" level=info msg="StopPodSandbox for \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\"" Nov 8 00:47:44.631383 kubelet[2558]: E1108 00:47:44.630952 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:44.631383 kubelet[2558]: E1108 00:47:44.631330 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:44.706810 kubelet[2558]: I1108 00:47:44.706495 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rvxhz" podStartSLOduration=2.235791182 podStartE2EDuration="24.706421122s" podCreationTimestamp="2025-11-08 00:47:20 +0000 UTC" firstStartedPulling="2025-11-08 00:47:21.479845582 +0000 UTC m=+27.622401182" lastFinishedPulling="2025-11-08 00:47:43.950475512 +0000 UTC m=+50.093031122" observedRunningTime="2025-11-08 00:47:44.705266715 +0000 UTC m=+50.847822315" watchObservedRunningTime="2025-11-08 00:47:44.706421122 +0000 UTC m=+50.848976722" Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.881 [INFO][3799] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.882 [INFO][3799] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" iface="eth0" netns="/var/run/netns/cni-dadc1515-f477-4cf3-06cf-a7798a77c4f0" Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.883 [INFO][3799] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" iface="eth0" netns="/var/run/netns/cni-dadc1515-f477-4cf3-06cf-a7798a77c4f0" Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.886 [INFO][3799] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" iface="eth0" netns="/var/run/netns/cni-dadc1515-f477-4cf3-06cf-a7798a77c4f0" Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.886 [INFO][3799] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.886 [INFO][3799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.946 [INFO][3828] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" HandleID="k8s-pod-network.fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Workload="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.947 [INFO][3828] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.947 [INFO][3828] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.956 [WARNING][3828] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" HandleID="k8s-pod-network.fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Workload="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.956 [INFO][3828] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" HandleID="k8s-pod-network.fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Workload="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.957 [INFO][3828] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:44.966558 containerd[1471]: 2025-11-08 00:47:44.964 [INFO][3799] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:44.971813 containerd[1471]: time="2025-11-08T00:47:44.967360396Z" level=info msg="TearDown network for sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\" successfully" Nov 8 00:47:44.971813 containerd[1471]: time="2025-11-08T00:47:44.967425996Z" level=info msg="StopPodSandbox for \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\" returns successfully" Nov 8 00:47:44.975476 systemd[1]: run-netns-cni\x2ddadc1515\x2df477\x2d4cf3\x2d06cf\x2da7798a77c4f0.mount: Deactivated successfully. 
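[Editorial note] The completed teardown above shows the shape of the IPAM release path: take the host-wide IPAM lock, release by handle, and treat a missing allocation as success ("Asked to release address but it doesn't exist. Ignoring") so that repeated deletes of a half-created sandbox stay idempotent. A schematic sketch with an in-memory map standing in for the datastore; the types are hypothetical, not Calico's.

package main

import (
	"fmt"
	"sync"
)

type ipamStore struct {
	mu       sync.Mutex        // stands in for the host-wide IPAM lock
	byHandle map[string]string // handle ID -> assigned IP
}

// release may be called any number of times for the same handle;
// a second call produces a warning, never an error.
func (s *ipamStore) release(handleID string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	ip, ok := s.byHandle[handleID]
	if !ok {
		fmt.Printf("WARNING asked to release %s but it doesn't exist, ignoring\n", handleID)
		return
	}
	delete(s.byHandle, handleID)
	fmt.Printf("released %s (was %s)\n", handleID, ip)
}

func main() {
	// Sample data: handle prefix mimics the log, the IP is invented.
	s := &ipamStore{byHandle: map[string]string{"k8s-pod-network.fcccd731": "192.168.76.2"}}
	s.release("k8s-pod-network.fcccd731") // releases the address
	s.release("k8s-pod-network.fcccd731") // idempotent: warns and returns
}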
Nov 8 00:47:45.050339 kubelet[2558]: I1108 00:47:45.050277 2558 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-whisker-ca-bundle\") pod \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\" (UID: \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\") " Nov 8 00:47:45.050339 kubelet[2558]: I1108 00:47:45.050340 2558 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vpdd8\" (UniqueName: \"kubernetes.io/projected/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-kube-api-access-vpdd8\") pod \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\" (UID: \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\") " Nov 8 00:47:45.051131 kubelet[2558]: I1108 00:47:45.050382 2558 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-whisker-backend-key-pair\") pod \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\" (UID: \"b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d\") " Nov 8 00:47:45.053024 kubelet[2558]: I1108 00:47:45.052959 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d" (UID: "b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:47:45.063186 kubelet[2558]: I1108 00:47:45.059017 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d" (UID: "b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:47:45.068744 systemd[1]: var-lib-kubelet-pods-b94ad5eb\x2db32f\x2d4d7b\x2d9d8f\x2db926b3db8f9d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:47:45.070969 kubelet[2558]: I1108 00:47:45.070005 2558 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-kube-api-access-vpdd8" (OuterVolumeSpecName: "kube-api-access-vpdd8") pod "b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d" (UID: "b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d"). InnerVolumeSpecName "kube-api-access-vpdd8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:47:45.084376 systemd[1]: var-lib-kubelet-pods-b94ad5eb\x2db32f\x2d4d7b\x2d9d8f\x2db926b3db8f9d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvpdd8.mount: Deactivated successfully. 
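[Editorial note] The UnmountVolume/TearDown sequence above is the kubelet's volume reconciler at work: it diffs the desired world (volumes of pods that should be running) against the actual world (volumes currently mounted) and tears down whatever is mounted but no longer desired, here the three volumes of the deleted whisker pod. A schematic sketch of that loop; the names are hypothetical, and the real implementation lives in kubelet's volumemanager/reconciler.

package main

import "fmt"

// reconcile unmounts every volume present in actual state but absent
// from desired state, mirroring the log lines above.
func reconcile(desired, actual map[string]bool) {
	for vol := range actual {
		if !desired[vol] {
			fmt.Printf("operationExecutor.UnmountVolume started for volume %q\n", vol)
			// TearDown would run here; on success the volume leaves actual state.
			delete(actual, vol)
		}
	}
}

func main() {
	desired := map[string]bool{} // pod b94ad5eb-... was deleted, nothing desired
	actual := map[string]bool{
		"whisker-ca-bundle":        true,
		"kube-api-access-vpdd8":    true,
		"whisker-backend-key-pair": true,
	}
	reconcile(desired, actual)
}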
Nov 8 00:47:45.151465 kubelet[2558]: I1108 00:47:45.151392 2558 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-whisker-ca-bundle\") on node \"172-239-57-65\" DevicePath \"\"" Nov 8 00:47:45.151465 kubelet[2558]: I1108 00:47:45.151447 2558 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vpdd8\" (UniqueName: \"kubernetes.io/projected/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-kube-api-access-vpdd8\") on node \"172-239-57-65\" DevicePath \"\"" Nov 8 00:47:45.151465 kubelet[2558]: I1108 00:47:45.151458 2558 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d-whisker-backend-key-pair\") on node \"172-239-57-65\" DevicePath \"\"" Nov 8 00:47:45.633386 kubelet[2558]: E1108 00:47:45.633330 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:45.647066 systemd[1]: Removed slice kubepods-besteffort-podb94ad5eb_b32f_4d7b_9d8f_b926b3db8f9d.slice - libcontainer container kubepods-besteffort-podb94ad5eb_b32f_4d7b_9d8f_b926b3db8f9d.slice. Nov 8 00:47:45.731473 systemd[1]: Created slice kubepods-besteffort-pod01ba0641_89d0_49ee_914a_4dc2009268af.slice - libcontainer container kubepods-besteffort-pod01ba0641_89d0_49ee_914a_4dc2009268af.slice. Nov 8 00:47:45.755932 kubelet[2558]: I1108 00:47:45.755872 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01ba0641-89d0-49ee-914a-4dc2009268af-whisker-ca-bundle\") pod \"whisker-db65b7ddc-kmwf2\" (UID: \"01ba0641-89d0-49ee-914a-4dc2009268af\") " pod="calico-system/whisker-db65b7ddc-kmwf2" Nov 8 00:47:45.755932 kubelet[2558]: I1108 00:47:45.755928 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45ztp\" (UniqueName: \"kubernetes.io/projected/01ba0641-89d0-49ee-914a-4dc2009268af-kube-api-access-45ztp\") pod \"whisker-db65b7ddc-kmwf2\" (UID: \"01ba0641-89d0-49ee-914a-4dc2009268af\") " pod="calico-system/whisker-db65b7ddc-kmwf2" Nov 8 00:47:45.756113 kubelet[2558]: I1108 00:47:45.755947 2558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/01ba0641-89d0-49ee-914a-4dc2009268af-whisker-backend-key-pair\") pod \"whisker-db65b7ddc-kmwf2\" (UID: \"01ba0641-89d0-49ee-914a-4dc2009268af\") " pod="calico-system/whisker-db65b7ddc-kmwf2" Nov 8 00:47:45.882697 systemd[1]: run-containerd-runc-k8s.io-9537688cdfdb8e530bf66ad2be85397606e1e784c48bcff7806953f9817700d0-runc.vYDQFV.mount: Deactivated successfully. 
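[Editorial note] The slice names in the systemd lines above encode the pod's QoS class and UID: with the systemd cgroup driver, "-" is the hierarchy separator inside unit names, so the kubelet rewrites the dashes of the pod UID to underscores before forming kubepods-<qos>-pod<uid>.slice. A sketch of that naming rule, checked against the whisker pod created above; kubelet's container-manager code is the authority.

package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the unit-name pattern seen in the log.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "01ba0641-89d0-49ee-914a-4dc2009268af"))
	// kubepods-besteffort-pod01ba0641_89d0_49ee_914a_4dc2009268af.slice
}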
Nov 8 00:47:46.041015 containerd[1471]: time="2025-11-08T00:47:46.040223230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-db65b7ddc-kmwf2,Uid:01ba0641-89d0-49ee-914a-4dc2009268af,Namespace:calico-system,Attempt:0,}" Nov 8 00:47:46.089238 kubelet[2558]: I1108 00:47:46.089191 2558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d" path="/var/lib/kubelet/pods/b94ad5eb-b32f-4d7b-9d8f-b926b3db8f9d/volumes" Nov 8 00:47:46.251222 systemd-networkd[1363]: calie6459663c52: Link UP Nov 8 00:47:46.252391 systemd-networkd[1363]: calie6459663c52: Gained carrier Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.115 [INFO][3872] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.134 [INFO][3872] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0 whisker-db65b7ddc- calico-system 01ba0641-89d0-49ee-914a-4dc2009268af 949 0 2025-11-08 00:47:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:db65b7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-239-57-65 whisker-db65b7ddc-kmwf2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie6459663c52 [] [] }} ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Namespace="calico-system" Pod="whisker-db65b7ddc-kmwf2" WorkloadEndpoint="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.134 [INFO][3872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Namespace="calico-system" Pod="whisker-db65b7ddc-kmwf2" WorkloadEndpoint="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.189 [INFO][3884] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" HandleID="k8s-pod-network.d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Workload="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.190 [INFO][3884] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" HandleID="k8s-pod-network.d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Workload="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-65", "pod":"whisker-db65b7ddc-kmwf2", "timestamp":"2025-11-08 00:47:46.189339333 +0000 UTC"}, Hostname:"172-239-57-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.191 [INFO][3884] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.191 [INFO][3884] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.191 [INFO][3884] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-65' Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.201 [INFO][3884] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" host="172-239-57-65" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.208 [INFO][3884] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-65" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.212 [INFO][3884] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.215 [INFO][3884] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.218 [INFO][3884] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.218 [INFO][3884] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" host="172-239-57-65" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.219 [INFO][3884] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183 Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.225 [INFO][3884] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" host="172-239-57-65" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.233 [INFO][3884] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.1/26] block=192.168.76.0/26 handle="k8s-pod-network.d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" host="172-239-57-65" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.233 [INFO][3884] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.1/26] handle="k8s-pod-network.d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" host="172-239-57-65" Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.233 [INFO][3884] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:47:46.290796 containerd[1471]: 2025-11-08 00:47:46.234 [INFO][3884] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.1/26] IPv6=[] ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" HandleID="k8s-pod-network.d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Workload="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0" Nov 8 00:47:46.293014 containerd[1471]: 2025-11-08 00:47:46.236 [INFO][3872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Namespace="calico-system" Pod="whisker-db65b7ddc-kmwf2" WorkloadEndpoint="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0", GenerateName:"whisker-db65b7ddc-", Namespace:"calico-system", SelfLink:"", UID:"01ba0641-89d0-49ee-914a-4dc2009268af", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"db65b7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"", Pod:"whisker-db65b7ddc-kmwf2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie6459663c52", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:46.293014 containerd[1471]: 2025-11-08 00:47:46.237 [INFO][3872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.1/32] ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Namespace="calico-system" Pod="whisker-db65b7ddc-kmwf2" WorkloadEndpoint="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0" Nov 8 00:47:46.293014 containerd[1471]: 2025-11-08 00:47:46.237 [INFO][3872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6459663c52 ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Namespace="calico-system" Pod="whisker-db65b7ddc-kmwf2" WorkloadEndpoint="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0" Nov 8 00:47:46.293014 containerd[1471]: 2025-11-08 00:47:46.256 [INFO][3872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Namespace="calico-system" Pod="whisker-db65b7ddc-kmwf2" WorkloadEndpoint="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0" Nov 8 00:47:46.293014 containerd[1471]: 2025-11-08 00:47:46.256 [INFO][3872] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Namespace="calico-system" Pod="whisker-db65b7ddc-kmwf2" WorkloadEndpoint="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0", GenerateName:"whisker-db65b7ddc-", Namespace:"calico-system", SelfLink:"", UID:"01ba0641-89d0-49ee-914a-4dc2009268af", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"db65b7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183", Pod:"whisker-db65b7ddc-kmwf2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie6459663c52", MAC:"ba:d7:48:f7:05:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:46.293014 containerd[1471]: 2025-11-08 00:47:46.287 [INFO][3872] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183" Namespace="calico-system" Pod="whisker-db65b7ddc-kmwf2" WorkloadEndpoint="172--239--57--65-k8s-whisker--db65b7ddc--kmwf2-eth0" Nov 8 00:47:46.356537 containerd[1471]: time="2025-11-08T00:47:46.354735489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:46.357312 containerd[1471]: time="2025-11-08T00:47:46.357266401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:46.358324 containerd[1471]: time="2025-11-08T00:47:46.357331051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:46.358324 containerd[1471]: time="2025-11-08T00:47:46.357579572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:46.427288 systemd[1]: Started cri-containerd-d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183.scope - libcontainer container d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183. 
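[editor's note] The IPAM exchange above (host-wide lock, affinity lookup for 192.168.76.0/26, claim of 192.168.76.1) allocates out of a /26 block for which node 172-239-57-65 holds an affinity. As a minimal sketch of the block arithmetic only — not Calico's implementation — Go's net/netip can walk the block to show the 64-address range the claims come from:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The IPAM records above claim 192.168.76.1 out of 192.168.76.0/26.
        // A /26 spans 64 addresses; this just walks the block to show the range.
        prefix := netip.MustParsePrefix("192.168.76.0/26")
        first := prefix.Addr()
        var last netip.Addr
        n := 0
        for a := first; prefix.Contains(a); a = a.Next() {
            last = a
            n++
        }
        fmt.Printf("block %s: %d addresses, %s-%s\n", prefix, n, first, last)
        // Output: block 192.168.76.0/26: 64 addresses, 192.168.76.0-192.168.76.63
    }

The first claim in this capture lands on .1; the later sandboxes below draw .2 and .3 from the same block.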
Nov 8 00:47:46.587547 containerd[1471]: time="2025-11-08T00:47:46.587510514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-db65b7ddc-kmwf2,Uid:01ba0641-89d0-49ee-914a-4dc2009268af,Namespace:calico-system,Attempt:0,} returns sandbox id \"d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183\"" Nov 8 00:47:46.613281 containerd[1471]: time="2025-11-08T00:47:46.611268133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:47:46.766360 containerd[1471]: time="2025-11-08T00:47:46.765050626Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:47:46.766662 containerd[1471]: time="2025-11-08T00:47:46.766602843Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:47:46.766759 containerd[1471]: time="2025-11-08T00:47:46.766723683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:47:46.767441 kubelet[2558]: E1108 00:47:46.767370 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:47:46.767565 kubelet[2558]: E1108 00:47:46.767471 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:47:46.767648 kubelet[2558]: E1108 00:47:46.767601 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-db65b7ddc-kmwf2_calico-system(01ba0641-89d0-49ee-914a-4dc2009268af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:47:46.770130 containerd[1471]: time="2025-11-08T00:47:46.769746147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:47:46.881131 systemd[1]: run-containerd-runc-k8s.io-d23dc77bccc216401e8925bbc2b44d47c94dc8de37d5bb98da51a22a47942183-runc.aHbUTI.mount: Deactivated successfully. 
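[editor's note] The PullImage failure above (and the whisker-backend failure just below) is a plain registry 404: containerd's resolver tries ghcr.io, gets http.StatusNotFound, and the kubelet surfaces ErrImagePull, which it later escalates to ImagePullBackOff. A hypothetical standalone reproduction using the containerd Go client — assuming the default socket path, the k8s.io namespace, and the v1 client module path — would see the same NotFound:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Not how kubelet pulls images; just a direct way to reproduce
        // the resolver's NotFound for the reference logged above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        if _, err := client.Pull(ctx, "ghcr.io/flatcar/calico/whisker:v3.30.4"); err != nil {
            fmt.Println("pull failed:", err) // expect the same "not found" as in the log
            return
        }
        fmt.Println("pulled OK")
    }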
Nov 8 00:47:46.901807 containerd[1471]: time="2025-11-08T00:47:46.901569761Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:47:46.903081 containerd[1471]: time="2025-11-08T00:47:46.902654486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:47:46.903081 containerd[1471]: time="2025-11-08T00:47:46.902962217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:47:46.904100 kubelet[2558]: E1108 00:47:46.904007 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:47:46.904100 kubelet[2558]: E1108 00:47:46.904071 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:47:46.904703 kubelet[2558]: E1108 00:47:46.904432 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-db65b7ddc-kmwf2_calico-system(01ba0641-89d0-49ee-914a-4dc2009268af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:47:46.905080 kubelet[2558]: E1108 00:47:46.905011 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af" Nov 8 00:47:47.043241 kernel: bpftool[4062]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:47:47.077286 containerd[1471]: time="2025-11-08T00:47:47.077236317Z" level=info msg="StopPodSandbox for \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\"" Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.158 [INFO][4072] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:47.244346 
containerd[1471]: 2025-11-08 00:47:47.158 [INFO][4072] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" iface="eth0" netns="/var/run/netns/cni-4cf20cff-2963-e233-0466-dd63a3613c6a" Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.159 [INFO][4072] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" iface="eth0" netns="/var/run/netns/cni-4cf20cff-2963-e233-0466-dd63a3613c6a" Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.160 [INFO][4072] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" iface="eth0" netns="/var/run/netns/cni-4cf20cff-2963-e233-0466-dd63a3613c6a" Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.160 [INFO][4072] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.160 [INFO][4072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.227 [INFO][4080] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" HandleID="k8s-pod-network.d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.227 [INFO][4080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.227 [INFO][4080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.235 [WARNING][4080] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" HandleID="k8s-pod-network.d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.236 [INFO][4080] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" HandleID="k8s-pod-network.d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.238 [INFO][4080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:47.244346 containerd[1471]: 2025-11-08 00:47:47.240 [INFO][4072] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:47.244845 containerd[1471]: time="2025-11-08T00:47:47.244561665Z" level=info msg="TearDown network for sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\" successfully" Nov 8 00:47:47.244845 containerd[1471]: time="2025-11-08T00:47:47.244589845Z" level=info msg="StopPodSandbox for \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\" returns successfully" Nov 8 00:47:47.252357 systemd[1]: run-netns-cni\x2d4cf20cff\x2d2963\x2de233\x2d0466\x2ddd63a3613c6a.mount: Deactivated successfully. Nov 8 00:47:47.254114 containerd[1471]: time="2025-11-08T00:47:47.253731583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5866ffd9dc-4k97x,Uid:91a10bd0-ee88-4b71-90ab-bbe7e6569a64,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:47:47.390392 systemd-networkd[1363]: cali37cd8ea7b67: Link UP Nov 8 00:47:47.391972 systemd-networkd[1363]: cali37cd8ea7b67: Gained carrier Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.315 [INFO][4088] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0 calico-apiserver-5866ffd9dc- calico-apiserver 91a10bd0-ee88-4b71-90ab-bbe7e6569a64 963 0 2025-11-08 00:47:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5866ffd9dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-57-65 calico-apiserver-5866ffd9dc-4k97x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali37cd8ea7b67 [] [] }} ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-4k97x" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.316 [INFO][4088] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-4k97x" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.345 [INFO][4100] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" HandleID="k8s-pod-network.f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.345 [INFO][4100] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" HandleID="k8s-pod-network.f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-57-65", "pod":"calico-apiserver-5866ffd9dc-4k97x", "timestamp":"2025-11-08 00:47:47.345724001 +0000 UTC"}, Hostname:"172-239-57-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.346 [INFO][4100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.346 [INFO][4100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.346 [INFO][4100] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-65' Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.352 [INFO][4100] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" host="172-239-57-65" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.359 [INFO][4100] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-65" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.363 [INFO][4100] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.365 [INFO][4100] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.367 [INFO][4100] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.367 [INFO][4100] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" host="172-239-57-65" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.368 [INFO][4100] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.375 [INFO][4100] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" host="172-239-57-65" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.381 [INFO][4100] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.2/26] block=192.168.76.0/26 handle="k8s-pod-network.f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" host="172-239-57-65" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.381 [INFO][4100] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.2/26] handle="k8s-pod-network.f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" host="172-239-57-65" Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.381 [INFO][4100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:47:47.415000 containerd[1471]: 2025-11-08 00:47:47.381 [INFO][4100] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.2/26] IPv6=[] ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" HandleID="k8s-pod-network.f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:47.415704 containerd[1471]: 2025-11-08 00:47:47.384 [INFO][4088] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-4k97x" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0", GenerateName:"calico-apiserver-5866ffd9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"91a10bd0-ee88-4b71-90ab-bbe7e6569a64", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5866ffd9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"", Pod:"calico-apiserver-5866ffd9dc-4k97x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37cd8ea7b67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:47.415704 containerd[1471]: 2025-11-08 00:47:47.384 [INFO][4088] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.2/32] ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-4k97x" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:47.415704 containerd[1471]: 2025-11-08 00:47:47.384 [INFO][4088] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali37cd8ea7b67 ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-4k97x" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:47.415704 containerd[1471]: 2025-11-08 00:47:47.393 [INFO][4088] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-4k97x" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:47.415704 containerd[1471]: 2025-11-08 00:47:47.393 [INFO][4088] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-4k97x" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0", GenerateName:"calico-apiserver-5866ffd9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"91a10bd0-ee88-4b71-90ab-bbe7e6569a64", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5866ffd9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e", Pod:"calico-apiserver-5866ffd9dc-4k97x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37cd8ea7b67", MAC:"ca:a2:15:e0:24:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:47.415704 containerd[1471]: 2025-11-08 00:47:47.410 [INFO][4088] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-4k97x" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:47.455061 containerd[1471]: time="2025-11-08T00:47:47.453956679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:47.455061 containerd[1471]: time="2025-11-08T00:47:47.454019899Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:47.455061 containerd[1471]: time="2025-11-08T00:47:47.454034089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:47.455298 containerd[1471]: time="2025-11-08T00:47:47.454980173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:47.510299 systemd[1]: Started cri-containerd-f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e.scope - libcontainer container f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e. 
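[editor's note] Each record in this capture has the same outer shape: a syslog-style timestamp, a unit name with PID, then the message (containerd's structured key=value line or the Calico plugin's bracketed output). A rough, illustrative Go pattern for splitting that outer layer — the field names are mine, not any standard journald schema, and kernel lines without a PID would need a separate case:

    package main

    import (
        "fmt"
        "regexp"
    )

    // "<Mon D HH:MM:SS.ffffff> <unit>[<pid>]: <message>"
    var entry = regexp.MustCompile(`^(\w{3} +\d+ [\d:.]+) (\S+?)\[(\d+)\]: (.*)$`)

    func main() {
        // Sample record copied from this capture.
        line := `Nov 8 00:47:47.628606 containerd[1471]: time="2025-11-08T00:47:47.628470197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""`
        if m := entry.FindStringSubmatch(line); m != nil {
            fmt.Printf("ts=%s unit=%s pid=%s msg=%q\n", m[1], m[2], m[3], m[4])
        }
    }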
Nov 8 00:47:47.546518 systemd-networkd[1363]: vxlan.calico: Link UP Nov 8 00:47:47.546528 systemd-networkd[1363]: vxlan.calico: Gained carrier Nov 8 00:47:47.621253 containerd[1471]: time="2025-11-08T00:47:47.621213486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5866ffd9dc-4k97x,Uid:91a10bd0-ee88-4b71-90ab-bbe7e6569a64,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e\"" Nov 8 00:47:47.628606 containerd[1471]: time="2025-11-08T00:47:47.628470197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:47:47.652898 kubelet[2558]: E1108 00:47:47.652844 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af" Nov 8 00:47:47.762412 containerd[1471]: time="2025-11-08T00:47:47.761755199Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:47:47.763178 containerd[1471]: time="2025-11-08T00:47:47.762999305Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:47:47.763178 containerd[1471]: time="2025-11-08T00:47:47.763080385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:47:47.764078 kubelet[2558]: E1108 00:47:47.763421 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:47:47.764078 kubelet[2558]: E1108 00:47:47.763470 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:47:47.764078 kubelet[2558]: E1108 00:47:47.763553 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5866ffd9dc-4k97x_calico-apiserver(91a10bd0-ee88-4b71-90ab-bbe7e6569a64): ErrImagePull: rpc error: code = NotFound desc = failed 
to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:47:47.764078 kubelet[2558]: E1108 00:47:47.763593 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:47:48.284669 systemd-networkd[1363]: calie6459663c52: Gained IPv6LL Nov 8 00:47:48.477317 systemd-networkd[1363]: cali37cd8ea7b67: Gained IPv6LL Nov 8 00:47:48.652386 kubelet[2558]: E1108 00:47:48.652088 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:47:49.083222 containerd[1471]: time="2025-11-08T00:47:49.073307670Z" level=info msg="StopPodSandbox for \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\"" Nov 8 00:47:49.084756 containerd[1471]: time="2025-11-08T00:47:49.083969238Z" level=info msg="StopPodSandbox for \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\"" Nov 8 00:47:49.086156 containerd[1471]: time="2025-11-08T00:47:49.085257823Z" level=info msg="StopPodSandbox for \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\"" Nov 8 00:47:49.086156 containerd[1471]: time="2025-11-08T00:47:49.085465574Z" level=info msg="StopPodSandbox for \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\"" Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.288 [INFO][4266] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.295 [INFO][4266] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" iface="eth0" netns="/var/run/netns/cni-80149f4f-447f-2052-f762-6e2de7ac3b9f" Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.295 [INFO][4266] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" iface="eth0" netns="/var/run/netns/cni-80149f4f-447f-2052-f762-6e2de7ac3b9f" Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.300 [INFO][4266] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" iface="eth0" netns="/var/run/netns/cni-80149f4f-447f-2052-f762-6e2de7ac3b9f" Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.300 [INFO][4266] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.300 [INFO][4266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.386 [INFO][4303] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" HandleID="k8s-pod-network.0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.386 [INFO][4303] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.386 [INFO][4303] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.409 [WARNING][4303] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" HandleID="k8s-pod-network.0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.409 [INFO][4303] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" HandleID="k8s-pod-network.0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.415 [INFO][4303] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:49.454788 containerd[1471]: 2025-11-08 00:47:49.419 [INFO][4266] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:49.460044 containerd[1471]: time="2025-11-08T00:47:49.459171667Z" level=info msg="TearDown network for sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\" successfully" Nov 8 00:47:49.460044 containerd[1471]: time="2025-11-08T00:47:49.459228517Z" level=info msg="StopPodSandbox for \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\" returns successfully" Nov 8 00:47:49.464819 systemd[1]: run-netns-cni\x2d80149f4f\x2d447f\x2d2052\x2df762\x2d6e2de7ac3b9f.mount: Deactivated successfully. Nov 8 00:47:49.469243 containerd[1471]: time="2025-11-08T00:47:49.469098582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wb7lg,Uid:7494706d-b88b-42e3-9001-7633cd787a06,Namespace:calico-system,Attempt:1,}" Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.250 [INFO][4262] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.252 [INFO][4262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" iface="eth0" netns="/var/run/netns/cni-11cd03d8-22bd-bb45-c532-b0eee1f5e4f3" Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.255 [INFO][4262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" iface="eth0" netns="/var/run/netns/cni-11cd03d8-22bd-bb45-c532-b0eee1f5e4f3" Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.256 [INFO][4262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" iface="eth0" netns="/var/run/netns/cni-11cd03d8-22bd-bb45-c532-b0eee1f5e4f3" Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.256 [INFO][4262] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.256 [INFO][4262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.421 [INFO][4293] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" HandleID="k8s-pod-network.380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.421 [INFO][4293] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.421 [INFO][4293] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.474 [WARNING][4293] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" HandleID="k8s-pod-network.380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.474 [INFO][4293] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" HandleID="k8s-pod-network.380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.478 [INFO][4293] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:49.491228 containerd[1471]: 2025-11-08 00:47:49.481 [INFO][4262] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:49.497775 systemd[1]: run-netns-cni\x2d11cd03d8\x2d22bd\x2dbb45\x2dc532\x2db0eee1f5e4f3.mount: Deactivated successfully. 
Nov 8 00:47:49.499730 containerd[1471]: time="2025-11-08T00:47:49.499216060Z" level=info msg="TearDown network for sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\" successfully" Nov 8 00:47:49.499730 containerd[1471]: time="2025-11-08T00:47:49.499255071Z" level=info msg="StopPodSandbox for \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\" returns successfully" Nov 8 00:47:49.503623 containerd[1471]: time="2025-11-08T00:47:49.502745013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5866ffd9dc-wrnwt,Uid:2e86dafc-d904-4554-bd5d-17e562479113,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.239 [INFO][4268] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.240 [INFO][4268] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" iface="eth0" netns="/var/run/netns/cni-9f0e6856-2571-5369-c261-50c997483785" Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.240 [INFO][4268] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" iface="eth0" netns="/var/run/netns/cni-9f0e6856-2571-5369-c261-50c997483785" Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.240 [INFO][4268] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" iface="eth0" netns="/var/run/netns/cni-9f0e6856-2571-5369-c261-50c997483785" Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.240 [INFO][4268] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.240 [INFO][4268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.441 [INFO][4291] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" HandleID="k8s-pod-network.150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.442 [INFO][4291] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.478 [INFO][4291] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.515 [WARNING][4291] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" HandleID="k8s-pod-network.150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.516 [INFO][4291] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" HandleID="k8s-pod-network.150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.519 [INFO][4291] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:49.547423 containerd[1471]: 2025-11-08 00:47:49.540 [INFO][4268] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:49.548022 containerd[1471]: time="2025-11-08T00:47:49.547970025Z" level=info msg="TearDown network for sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\" successfully" Nov 8 00:47:49.548022 containerd[1471]: time="2025-11-08T00:47:49.548020876Z" level=info msg="StopPodSandbox for \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\" returns successfully" Nov 8 00:47:49.549548 kubelet[2558]: E1108 00:47:49.549503 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:49.552095 containerd[1471]: time="2025-11-08T00:47:49.551767139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7vn5c,Uid:b7389023-6931-4371-ab96-cba907ffb0fd,Namespace:kube-system,Attempt:1,}" Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.257 [INFO][4267] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.258 [INFO][4267] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" iface="eth0" netns="/var/run/netns/cni-70bd7acb-bdd9-31d7-19b7-dc17e0a5e99f" Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.262 [INFO][4267] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" iface="eth0" netns="/var/run/netns/cni-70bd7acb-bdd9-31d7-19b7-dc17e0a5e99f" Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.267 [INFO][4267] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" iface="eth0" netns="/var/run/netns/cni-70bd7acb-bdd9-31d7-19b7-dc17e0a5e99f" Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.267 [INFO][4267] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.267 [INFO][4267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.443 [INFO][4295] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" HandleID="k8s-pod-network.b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.444 [INFO][4295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.519 [INFO][4295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.537 [WARNING][4295] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" HandleID="k8s-pod-network.b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.538 [INFO][4295] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" HandleID="k8s-pod-network.b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.541 [INFO][4295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:49.553344 containerd[1471]: 2025-11-08 00:47:49.547 [INFO][4267] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:49.555173 containerd[1471]: time="2025-11-08T00:47:49.554469909Z" level=info msg="TearDown network for sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\" successfully" Nov 8 00:47:49.555173 containerd[1471]: time="2025-11-08T00:47:49.554515319Z" level=info msg="StopPodSandbox for \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\" returns successfully" Nov 8 00:47:49.561041 containerd[1471]: time="2025-11-08T00:47:49.560117599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fb4f5b7-cmrkg,Uid:0f272084-e1d2-446c-8416-88d0ec3d2c2e,Namespace:calico-system,Attempt:1,}" Nov 8 00:47:49.566387 systemd-networkd[1363]: vxlan.calico: Gained IPv6LL Nov 8 00:47:49.786560 systemd-networkd[1363]: cali96a222f6e40: Link UP Nov 8 00:47:49.787811 systemd-networkd[1363]: cali96a222f6e40: Gained carrier Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.615 [INFO][4320] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0 goldmane-7c778bb748- calico-system 7494706d-b88b-42e3-9001-7633cd787a06 992 0 2025-11-08 00:47:18 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-239-57-65 goldmane-7c778bb748-wb7lg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali96a222f6e40 [] [] }} ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Namespace="calico-system" Pod="goldmane-7c778bb748-wb7lg" WorkloadEndpoint="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.615 [INFO][4320] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Namespace="calico-system" Pod="goldmane-7c778bb748-wb7lg" WorkloadEndpoint="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.676 [INFO][4354] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" HandleID="k8s-pod-network.423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.678 [INFO][4354] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" HandleID="k8s-pod-network.423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032dde0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-65", "pod":"goldmane-7c778bb748-wb7lg", "timestamp":"2025-11-08 00:47:49.676885469 +0000 UTC"}, Hostname:"172-239-57-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.678 [INFO][4354] ipam/ipam_plugin.go 377: About to acquire 
host-wide IPAM lock. Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.681 [INFO][4354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.681 [INFO][4354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-65' Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.716 [INFO][4354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" host="172-239-57-65" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.726 [INFO][4354] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-65" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.734 [INFO][4354] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.738 [INFO][4354] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.744 [INFO][4354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.744 [INFO][4354] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" host="172-239-57-65" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.746 [INFO][4354] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.752 [INFO][4354] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" host="172-239-57-65" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.762 [INFO][4354] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.3/26] block=192.168.76.0/26 handle="k8s-pod-network.423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" host="172-239-57-65" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.762 [INFO][4354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.3/26] handle="k8s-pod-network.423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" host="172-239-57-65" Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.763 [INFO][4354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:47:49.812497 containerd[1471]: 2025-11-08 00:47:49.764 [INFO][4354] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.3/26] IPv6=[] ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" HandleID="k8s-pod-network.423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:49.813428 containerd[1471]: 2025-11-08 00:47:49.772 [INFO][4320] cni-plugin/k8s.go 418: Populated endpoint ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Namespace="calico-system" Pod="goldmane-7c778bb748-wb7lg" WorkloadEndpoint="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7494706d-b88b-42e3-9001-7633cd787a06", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"", Pod:"goldmane-7c778bb748-wb7lg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96a222f6e40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:49.813428 containerd[1471]: 2025-11-08 00:47:49.774 [INFO][4320] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.3/32] ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Namespace="calico-system" Pod="goldmane-7c778bb748-wb7lg" WorkloadEndpoint="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:49.813428 containerd[1471]: 2025-11-08 00:47:49.775 [INFO][4320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96a222f6e40 ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Namespace="calico-system" Pod="goldmane-7c778bb748-wb7lg" WorkloadEndpoint="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:49.813428 containerd[1471]: 2025-11-08 00:47:49.790 [INFO][4320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Namespace="calico-system" Pod="goldmane-7c778bb748-wb7lg" WorkloadEndpoint="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:49.813428 containerd[1471]: 2025-11-08 00:47:49.790 [INFO][4320] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Namespace="calico-system" Pod="goldmane-7c778bb748-wb7lg" 
WorkloadEndpoint="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7494706d-b88b-42e3-9001-7633cd787a06", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce", Pod:"goldmane-7c778bb748-wb7lg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96a222f6e40", MAC:"0e:e2:33:79:07:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:49.813428 containerd[1471]: 2025-11-08 00:47:49.808 [INFO][4320] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce" Namespace="calico-system" Pod="goldmane-7c778bb748-wb7lg" WorkloadEndpoint="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:49.857066 containerd[1471]: time="2025-11-08T00:47:49.854353117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:49.857066 containerd[1471]: time="2025-11-08T00:47:49.855184890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:49.857066 containerd[1471]: time="2025-11-08T00:47:49.855209960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:49.857066 containerd[1471]: time="2025-11-08T00:47:49.855311490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:49.892341 systemd[1]: Started cri-containerd-423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce.scope - libcontainer container 423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce. 
Nov 8 00:47:49.906050 systemd-networkd[1363]: cali1af59411521: Link UP Nov 8 00:47:49.908872 systemd-networkd[1363]: cali1af59411521: Gained carrier Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.626 [INFO][4331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0 calico-apiserver-5866ffd9dc- calico-apiserver 2e86dafc-d904-4554-bd5d-17e562479113 990 0 2025-11-08 00:47:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5866ffd9dc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-239-57-65 calico-apiserver-5866ffd9dc-wrnwt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1af59411521 [] [] }} ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-wrnwt" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.626 [INFO][4331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-wrnwt" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.712 [INFO][4364] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" HandleID="k8s-pod-network.482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.717 [INFO][4364] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" HandleID="k8s-pod-network.482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039dea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172-239-57-65", "pod":"calico-apiserver-5866ffd9dc-wrnwt", "timestamp":"2025-11-08 00:47:49.712790087 +0000 UTC"}, Hostname:"172-239-57-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.721 [INFO][4364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.762 [INFO][4364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.762 [INFO][4364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-65' Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.818 [INFO][4364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" host="172-239-57-65" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.830 [INFO][4364] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-65" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.836 [INFO][4364] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.839 [INFO][4364] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.843 [INFO][4364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.843 [INFO][4364] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" host="172-239-57-65" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.846 [INFO][4364] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.855 [INFO][4364] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" host="172-239-57-65" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.863 [INFO][4364] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.4/26] block=192.168.76.0/26 handle="k8s-pod-network.482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" host="172-239-57-65" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.864 [INFO][4364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.4/26] handle="k8s-pod-network.482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" host="172-239-57-65" Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.864 [INFO][4364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:47:49.940997 containerd[1471]: 2025-11-08 00:47:49.865 [INFO][4364] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.4/26] IPv6=[] ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" HandleID="k8s-pod-network.482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:49.941834 containerd[1471]: 2025-11-08 00:47:49.885 [INFO][4331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-wrnwt" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0", GenerateName:"calico-apiserver-5866ffd9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e86dafc-d904-4554-bd5d-17e562479113", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5866ffd9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"", Pod:"calico-apiserver-5866ffd9dc-wrnwt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1af59411521", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:49.941834 containerd[1471]: 2025-11-08 00:47:49.886 [INFO][4331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.4/32] ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-wrnwt" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:49.941834 containerd[1471]: 2025-11-08 00:47:49.886 [INFO][4331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1af59411521 ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-wrnwt" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:49.941834 containerd[1471]: 2025-11-08 00:47:49.916 [INFO][4331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-wrnwt" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:49.941834 containerd[1471]: 2025-11-08 00:47:49.923 [INFO][4331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-wrnwt" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0", GenerateName:"calico-apiserver-5866ffd9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e86dafc-d904-4554-bd5d-17e562479113", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5866ffd9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe", Pod:"calico-apiserver-5866ffd9dc-wrnwt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1af59411521", MAC:"fa:3a:a4:37:7c:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:49.941834 containerd[1471]: 2025-11-08 00:47:49.936 [INFO][4331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe" Namespace="calico-apiserver" Pod="calico-apiserver-5866ffd9dc-wrnwt" WorkloadEndpoint="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:50.016517 containerd[1471]: time="2025-11-08T00:47:50.015112870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:50.025694 systemd-networkd[1363]: cali9b72bcf0914: Link UP Nov 8 00:47:50.028167 containerd[1471]: time="2025-11-08T00:47:50.024933872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:50.028167 containerd[1471]: time="2025-11-08T00:47:50.024958352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:50.028167 containerd[1471]: time="2025-11-08T00:47:50.025061283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:50.030793 systemd-networkd[1363]: cali9b72bcf0914: Gained carrier Nov 8 00:47:50.072206 containerd[1471]: time="2025-11-08T00:47:50.071369526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wb7lg,Uid:7494706d-b88b-42e3-9001-7633cd787a06,Namespace:calico-system,Attempt:1,} returns sandbox id \"423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce\"" Nov 8 00:47:50.081282 containerd[1471]: time="2025-11-08T00:47:50.080974478Z" level=info msg="StopPodSandbox for \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\"" Nov 8 00:47:50.086083 containerd[1471]: time="2025-11-08T00:47:50.086040665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.714 [INFO][4361] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0 calico-kube-controllers-64fb4f5b7- calico-system 0f272084-e1d2-446c-8416-88d0ec3d2c2e 991 0 2025-11-08 00:47:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64fb4f5b7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-239-57-65 calico-kube-controllers-64fb4f5b7-cmrkg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9b72bcf0914 [] [] }} ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Namespace="calico-system" Pod="calico-kube-controllers-64fb4f5b7-cmrkg" WorkloadEndpoint="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.714 [INFO][4361] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Namespace="calico-system" Pod="calico-kube-controllers-64fb4f5b7-cmrkg" WorkloadEndpoint="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.776 [INFO][4384] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" HandleID="k8s-pod-network.c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.781 [INFO][4384] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" HandleID="k8s-pod-network.c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333890), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-65", "pod":"calico-kube-controllers-64fb4f5b7-cmrkg", "timestamp":"2025-11-08 00:47:49.776485867 +0000 UTC"}, Hostname:"172-239-57-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.781 
[INFO][4384] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.865 [INFO][4384] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.867 [INFO][4384] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-65' Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.915 [INFO][4384] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" host="172-239-57-65" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.928 [INFO][4384] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-65" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.944 [INFO][4384] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.949 [INFO][4384] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.953 [INFO][4384] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.953 [INFO][4384] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" host="172-239-57-65" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.954 [INFO][4384] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8 Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.960 [INFO][4384] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" host="172-239-57-65" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.968 [INFO][4384] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.5/26] block=192.168.76.0/26 handle="k8s-pod-network.c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" host="172-239-57-65" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.968 [INFO][4384] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.5/26] handle="k8s-pod-network.c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" host="172-239-57-65" Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.968 [INFO][4384] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:47:50.105170 containerd[1471]: 2025-11-08 00:47:49.968 [INFO][4384] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.5/26] IPv6=[] ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" HandleID="k8s-pod-network.c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:50.105832 containerd[1471]: 2025-11-08 00:47:49.994 [INFO][4361] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Namespace="calico-system" Pod="calico-kube-controllers-64fb4f5b7-cmrkg" WorkloadEndpoint="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0", GenerateName:"calico-kube-controllers-64fb4f5b7-", Namespace:"calico-system", SelfLink:"", UID:"0f272084-e1d2-446c-8416-88d0ec3d2c2e", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fb4f5b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"", Pod:"calico-kube-controllers-64fb4f5b7-cmrkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b72bcf0914", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:50.105832 containerd[1471]: 2025-11-08 00:47:49.994 [INFO][4361] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.5/32] ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Namespace="calico-system" Pod="calico-kube-controllers-64fb4f5b7-cmrkg" WorkloadEndpoint="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:50.105832 containerd[1471]: 2025-11-08 00:47:49.994 [INFO][4361] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b72bcf0914 ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Namespace="calico-system" Pod="calico-kube-controllers-64fb4f5b7-cmrkg" WorkloadEndpoint="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:50.105832 containerd[1471]: 2025-11-08 00:47:50.036 [INFO][4361] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Namespace="calico-system" Pod="calico-kube-controllers-64fb4f5b7-cmrkg" WorkloadEndpoint="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:50.105832 containerd[1471]: 2025-11-08 00:47:50.045 [INFO][4361] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Namespace="calico-system" Pod="calico-kube-controllers-64fb4f5b7-cmrkg" WorkloadEndpoint="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0", GenerateName:"calico-kube-controllers-64fb4f5b7-", Namespace:"calico-system", SelfLink:"", UID:"0f272084-e1d2-446c-8416-88d0ec3d2c2e", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fb4f5b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8", Pod:"calico-kube-controllers-64fb4f5b7-cmrkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b72bcf0914", MAC:"a2:b4:e5:f1:85:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:50.105832 containerd[1471]: 2025-11-08 00:47:50.070 [INFO][4361] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8" Namespace="calico-system" Pod="calico-kube-controllers-64fb4f5b7-cmrkg" WorkloadEndpoint="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:50.127415 systemd[1]: Started cri-containerd-482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe.scope - libcontainer container 482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe. 
Nov 8 00:47:50.151452 systemd-networkd[1363]: cali65f8928a040: Link UP Nov 8 00:47:50.153694 systemd-networkd[1363]: cali65f8928a040: Gained carrier Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:49.738 [INFO][4345] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0 coredns-66bc5c9577- kube-system b7389023-6931-4371-ab96-cba907ffb0fd 989 0 2025-11-08 00:47:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-57-65 coredns-66bc5c9577-7vn5c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali65f8928a040 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Namespace="kube-system" Pod="coredns-66bc5c9577-7vn5c" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:49.739 [INFO][4345] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Namespace="kube-system" Pod="coredns-66bc5c9577-7vn5c" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:49.800 [INFO][4390] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" HandleID="k8s-pod-network.db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:49.800 [INFO][4390] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" HandleID="k8s-pod-network.db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cfed0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-57-65", "pod":"coredns-66bc5c9577-7vn5c", "timestamp":"2025-11-08 00:47:49.800553714 +0000 UTC"}, Hostname:"172-239-57-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:49.800 [INFO][4390] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:49.972 [INFO][4390] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:49.973 [INFO][4390] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-65' Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.015 [INFO][4390] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" host="172-239-57-65" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.035 [INFO][4390] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-65" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.050 [INFO][4390] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.054 [INFO][4390] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.062 [INFO][4390] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.063 [INFO][4390] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" host="172-239-57-65" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.069 [INFO][4390] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0 Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.092 [INFO][4390] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" host="172-239-57-65" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.128 [INFO][4390] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.6/26] block=192.168.76.0/26 handle="k8s-pod-network.db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" host="172-239-57-65" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.128 [INFO][4390] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.6/26] handle="k8s-pod-network.db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" host="172-239-57-65" Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.128 [INFO][4390] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:47:50.195803 containerd[1471]: 2025-11-08 00:47:50.129 [INFO][4390] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.6/26] IPv6=[] ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" HandleID="k8s-pod-network.db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:50.197725 containerd[1471]: 2025-11-08 00:47:50.137 [INFO][4345] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Namespace="kube-system" Pod="coredns-66bc5c9577-7vn5c" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b7389023-6931-4371-ab96-cba907ffb0fd", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"", Pod:"coredns-66bc5c9577-7vn5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65f8928a040", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:50.197725 containerd[1471]: 2025-11-08 00:47:50.139 [INFO][4345] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.6/32] ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Namespace="kube-system" Pod="coredns-66bc5c9577-7vn5c" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:50.197725 containerd[1471]: 2025-11-08 00:47:50.139 [INFO][4345] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65f8928a040 ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Namespace="kube-system" Pod="coredns-66bc5c9577-7vn5c" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:50.197725 
containerd[1471]: 2025-11-08 00:47:50.158 [INFO][4345] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Namespace="kube-system" Pod="coredns-66bc5c9577-7vn5c" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:50.197725 containerd[1471]: 2025-11-08 00:47:50.165 [INFO][4345] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Namespace="kube-system" Pod="coredns-66bc5c9577-7vn5c" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b7389023-6931-4371-ab96-cba907ffb0fd", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0", Pod:"coredns-66bc5c9577-7vn5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65f8928a040", MAC:"d6:92:e4:91:98:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:50.197725 containerd[1471]: 2025-11-08 00:47:50.183 [INFO][4345] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0" Namespace="kube-system" Pod="coredns-66bc5c9577-7vn5c" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:50.233057 containerd[1471]: time="2025-11-08T00:47:50.232505819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:50.233057 containerd[1471]: time="2025-11-08T00:47:50.232625980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:50.233057 containerd[1471]: time="2025-11-08T00:47:50.232688770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:50.233057 containerd[1471]: time="2025-11-08T00:47:50.232840330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:50.264993 containerd[1471]: time="2025-11-08T00:47:50.264353834Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:47:50.266701 containerd[1471]: time="2025-11-08T00:47:50.266177610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:47:50.267164 containerd[1471]: time="2025-11-08T00:47:50.266859382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:47:50.268306 kubelet[2558]: E1108 00:47:50.268043 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:47:50.268303 systemd[1]: Started cri-containerd-c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8.scope - libcontainer container c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8. Nov 8 00:47:50.269166 kubelet[2558]: E1108 00:47:50.269090 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:47:50.270664 kubelet[2558]: E1108 00:47:50.270309 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wb7lg_calico-system(7494706d-b88b-42e3-9001-7633cd787a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:47:50.271167 kubelet[2558]: E1108 00:47:50.271034 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:47:50.294601 containerd[1471]: time="2025-11-08T00:47:50.293295930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:50.294601 containerd[1471]: time="2025-11-08T00:47:50.293918262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:50.294601 containerd[1471]: time="2025-11-08T00:47:50.293931222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:50.294601 containerd[1471]: time="2025-11-08T00:47:50.294033752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:50.346387 systemd[1]: Started cri-containerd-db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0.scope - libcontainer container db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0. Nov 8 00:47:50.422467 containerd[1471]: time="2025-11-08T00:47:50.422123636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5866ffd9dc-wrnwt,Uid:2e86dafc-d904-4554-bd5d-17e562479113,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe\"" Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.286 [INFO][4503] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.287 [INFO][4503] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" iface="eth0" netns="/var/run/netns/cni-93c51ecd-715e-2048-5d40-95cab18f79c2" Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.288 [INFO][4503] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" iface="eth0" netns="/var/run/netns/cni-93c51ecd-715e-2048-5d40-95cab18f79c2" Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.288 [INFO][4503] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" iface="eth0" netns="/var/run/netns/cni-93c51ecd-715e-2048-5d40-95cab18f79c2" Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.288 [INFO][4503] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.288 [INFO][4503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.397 [INFO][4569] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" HandleID="k8s-pod-network.92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.397 [INFO][4569] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.397 [INFO][4569] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.408 [WARNING][4569] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" HandleID="k8s-pod-network.92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.408 [INFO][4569] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" HandleID="k8s-pod-network.92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.410 [INFO][4569] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:50.425890 containerd[1471]: 2025-11-08 00:47:50.414 [INFO][4503] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:50.427549 containerd[1471]: time="2025-11-08T00:47:50.427523374Z" level=info msg="TearDown network for sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\" successfully" Nov 8 00:47:50.427680 containerd[1471]: time="2025-11-08T00:47:50.427654614Z" level=info msg="StopPodSandbox for \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\" returns successfully" Nov 8 00:47:50.431969 containerd[1471]: time="2025-11-08T00:47:50.431855208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7zs58,Uid:1dbab252-cddb-4b1b-96da-a6419c1af573,Namespace:calico-system,Attempt:1,}" Nov 8 00:47:50.434079 containerd[1471]: time="2025-11-08T00:47:50.434059175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:47:50.475901 systemd[1]: run-netns-cni\x2d93c51ecd\x2d715e\x2d2048\x2d5d40\x2d95cab18f79c2.mount: Deactivated successfully. Nov 8 00:47:50.476115 systemd[1]: run-netns-cni\x2d70bd7acb\x2dbdd9\x2d31d7\x2d19b7\x2ddc17e0a5e99f.mount: Deactivated successfully. Nov 8 00:47:50.476667 systemd[1]: run-netns-cni\x2d9f0e6856\x2d2571\x2d5369\x2dc261\x2d50c997483785.mount: Deactivated successfully. 
Nov 8 00:47:50.497969 containerd[1471]: time="2025-11-08T00:47:50.497915106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7vn5c,Uid:b7389023-6931-4371-ab96-cba907ffb0fd,Namespace:kube-system,Attempt:1,} returns sandbox id \"db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0\"" Nov 8 00:47:50.501803 kubelet[2558]: E1108 00:47:50.500403 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:50.509549 containerd[1471]: time="2025-11-08T00:47:50.509508455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64fb4f5b7-cmrkg,Uid:0f272084-e1d2-446c-8416-88d0ec3d2c2e,Namespace:calico-system,Attempt:1,} returns sandbox id \"c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8\"" Nov 8 00:47:50.516049 containerd[1471]: time="2025-11-08T00:47:50.515543555Z" level=info msg="CreateContainer within sandbox \"db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:47:50.543793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1145785620.mount: Deactivated successfully. Nov 8 00:47:50.545942 containerd[1471]: time="2025-11-08T00:47:50.545904045Z" level=info msg="CreateContainer within sandbox \"db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21226d804647b748c253ac85b350bb71596d3ae0c337343db1f652a0aeed3372\"" Nov 8 00:47:50.548408 containerd[1471]: time="2025-11-08T00:47:50.548180082Z" level=info msg="StartContainer for \"21226d804647b748c253ac85b350bb71596d3ae0c337343db1f652a0aeed3372\"" Nov 8 00:47:50.601069 containerd[1471]: time="2025-11-08T00:47:50.600941517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:47:50.602399 systemd[1]: Started cri-containerd-21226d804647b748c253ac85b350bb71596d3ae0c337343db1f652a0aeed3372.scope - libcontainer container 21226d804647b748c253ac85b350bb71596d3ae0c337343db1f652a0aeed3372. 
Nov 8 00:47:50.608546 containerd[1471]: time="2025-11-08T00:47:50.608421542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:47:50.609205 kubelet[2558]: E1108 00:47:50.608816 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:47:50.609205 kubelet[2558]: E1108 00:47:50.608888 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:47:50.609205 kubelet[2558]: E1108 00:47:50.609081 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5866ffd9dc-wrnwt_calico-apiserver(2e86dafc-d904-4554-bd5d-17e562479113): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:47:50.610267 kubelet[2558]: E1108 00:47:50.609149 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:47:50.610859 containerd[1471]: time="2025-11-08T00:47:50.610653459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:47:50.611281 containerd[1471]: time="2025-11-08T00:47:50.611114210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:47:50.675404 kubelet[2558]: E1108 00:47:50.674921 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:47:50.693648 kubelet[2558]: E1108 00:47:50.692625 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:47:50.693785 containerd[1471]: time="2025-11-08T00:47:50.693058911Z" level=info msg="StartContainer for \"21226d804647b748c253ac85b350bb71596d3ae0c337343db1f652a0aeed3372\" returns successfully" Nov 8 00:47:50.702751 systemd-networkd[1363]: cali5fa68ff8f64: Link UP Nov 8 00:47:50.707746 systemd-networkd[1363]: cali5fa68ff8f64: Gained carrier Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.566 [INFO][4610] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--65-k8s-csi--node--driver--7zs58-eth0 csi-node-driver- calico-system 1dbab252-cddb-4b1b-96da-a6419c1af573 1012 0 2025-11-08 00:47:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-239-57-65 csi-node-driver-7zs58 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5fa68ff8f64 [] [] }} ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Namespace="calico-system" Pod="csi-node-driver-7zs58" WorkloadEndpoint="172--239--57--65-k8s-csi--node--driver--7zs58-" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.566 [INFO][4610] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Namespace="calico-system" Pod="csi-node-driver-7zs58" WorkloadEndpoint="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.625 [INFO][4650] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" HandleID="k8s-pod-network.7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.625 [INFO][4650] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" HandleID="k8s-pod-network.7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032b6e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-239-57-65", "pod":"csi-node-driver-7zs58", "timestamp":"2025-11-08 00:47:50.625260587 +0000 UTC"}, Hostname:"172-239-57-65", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.625 [INFO][4650] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.625 [INFO][4650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.625 [INFO][4650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-65' Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.636 [INFO][4650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" host="172-239-57-65" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.645 [INFO][4650] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-65" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.656 [INFO][4650] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.659 [INFO][4650] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.664 [INFO][4650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.664 [INFO][4650] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" host="172-239-57-65" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.666 [INFO][4650] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0 Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.671 [INFO][4650] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" host="172-239-57-65" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.678 [INFO][4650] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.7/26] block=192.168.76.0/26 handle="k8s-pod-network.7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" host="172-239-57-65" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.678 [INFO][4650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.7/26] handle="k8s-pod-network.7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" host="172-239-57-65" Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.678 [INFO][4650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
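Annotation: the IPAM walk above is doing small, checkable arithmetic. The node's affine block 192.168.76.0/26 holds 2^(32-26) = 64 addresses, and the allocator hands out the next unassigned one — 192.168.76.7 for this sandbox, which implies the lower addresses were consumed by earlier assignments. A toy version of that walk with only the standard library (the in-memory "already assigned" set is an illustration, not Calico's datastore):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.76.0/26")
	size := 1 << (32 - block.Bits())
	fmt.Println("block size:", size) // 64 addresses in a /26

	// Pretend .0 through .6 are taken, as they evidently are by this
	// point in the log, then find the first free address.
	assigned := map[netip.Addr]bool{}
	for i, a := 0, block.Addr(); i < 7; i, a = i+1, a.Next() {
		assigned[a] = true
	}
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !assigned[a] {
			fmt.Println("next free:", a) // 192.168.76.7, matching the log
			break
		}
	}
}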
Nov 8 00:47:50.758493 containerd[1471]: 2025-11-08 00:47:50.678 [INFO][4650] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.7/26] IPv6=[] ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" HandleID="k8s-pod-network.7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:50.759166 containerd[1471]: 2025-11-08 00:47:50.687 [INFO][4610] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Namespace="calico-system" Pod="csi-node-driver-7zs58" WorkloadEndpoint="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-csi--node--driver--7zs58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1dbab252-cddb-4b1b-96da-a6419c1af573", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"", Pod:"csi-node-driver-7zs58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5fa68ff8f64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:50.759166 containerd[1471]: 2025-11-08 00:47:50.689 [INFO][4610] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.7/32] ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Namespace="calico-system" Pod="csi-node-driver-7zs58" WorkloadEndpoint="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:50.759166 containerd[1471]: 2025-11-08 00:47:50.689 [INFO][4610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fa68ff8f64 ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Namespace="calico-system" Pod="csi-node-driver-7zs58" WorkloadEndpoint="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:50.759166 containerd[1471]: 2025-11-08 00:47:50.711 [INFO][4610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Namespace="calico-system" Pod="csi-node-driver-7zs58" WorkloadEndpoint="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:50.759166 containerd[1471]: 2025-11-08 00:47:50.711 [INFO][4610] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Namespace="calico-system" 
Pod="csi-node-driver-7zs58" WorkloadEndpoint="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-csi--node--driver--7zs58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1dbab252-cddb-4b1b-96da-a6419c1af573", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0", Pod:"csi-node-driver-7zs58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5fa68ff8f64", MAC:"e2:e8:96:22:a8:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:50.759166 containerd[1471]: 2025-11-08 00:47:50.749 [INFO][4610] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0" Namespace="calico-system" Pod="csi-node-driver-7zs58" WorkloadEndpoint="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:50.781560 containerd[1471]: time="2025-11-08T00:47:50.781515004Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:47:50.782928 containerd[1471]: time="2025-11-08T00:47:50.782794158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:47:50.783082 containerd[1471]: time="2025-11-08T00:47:50.782903998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:47:50.784472 kubelet[2558]: E1108 00:47:50.784408 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:47:50.784637 kubelet[2558]: E1108 00:47:50.784617 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:47:50.784863 kubelet[2558]: E1108 00:47:50.784836 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-64fb4f5b7-cmrkg_calico-system(0f272084-e1d2-446c-8416-88d0ec3d2c2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:47:50.784989 kubelet[2558]: E1108 00:47:50.784961 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:47:50.796677 containerd[1471]: time="2025-11-08T00:47:50.796572984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:50.797050 containerd[1471]: time="2025-11-08T00:47:50.796814184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:50.797170 containerd[1471]: time="2025-11-08T00:47:50.797103985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:50.797579 containerd[1471]: time="2025-11-08T00:47:50.797365226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:50.827320 systemd[1]: Started cri-containerd-7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0.scope - libcontainer container 7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0. 
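Annotation: the systemd scope just started is the runc shim for the new sandbox, and one entry later RunPodSandbox returns its id. The &PodSandboxMetadata{...} containerd prints is the CRI protobuf, and the same gRPC surface kubelet uses is reachable directly on containerd's socket. A sketch under those assumptions (module paths k8s.io/cri-api and google.golang.org/grpc; the socket path is the common default and may differ per host):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// List sandboxes; the csi-node-driver sandbox created above shows up
	// as 7df348ff8f13... once RunPodSandbox returns.
	client := runtime.NewRuntimeServiceClient(conn)
	resp, err := client.ListPodSandbox(context.Background(),
		&runtime.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, sb := range resp.Items {
		fmt.Printf("%s %s/%s attempt=%d\n",
			sb.Id[:12], sb.Metadata.Namespace, sb.Metadata.Name, sb.Metadata.Attempt)
	}
}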
Nov 8 00:47:50.880104 containerd[1471]: time="2025-11-08T00:47:50.880056750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7zs58,Uid:1dbab252-cddb-4b1b-96da-a6419c1af573,Namespace:calico-system,Attempt:1,} returns sandbox id \"7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0\"" Nov 8 00:47:50.883602 containerd[1471]: time="2025-11-08T00:47:50.883581182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:47:51.015217 containerd[1471]: time="2025-11-08T00:47:51.015124002Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:47:51.016229 containerd[1471]: time="2025-11-08T00:47:51.016161365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:47:51.016485 containerd[1471]: time="2025-11-08T00:47:51.016180236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:47:51.016744 kubelet[2558]: E1108 00:47:51.016697 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:47:51.016911 kubelet[2558]: E1108 00:47:51.016755 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:47:51.016911 kubelet[2558]: E1108 00:47:51.016856 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7zs58_calico-system(1dbab252-cddb-4b1b-96da-a6419c1af573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:47:51.018512 containerd[1471]: time="2025-11-08T00:47:51.018466973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:47:51.073969 containerd[1471]: time="2025-11-08T00:47:51.073789960Z" level=info msg="StopPodSandbox for \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\"" Nov 8 00:47:51.143179 containerd[1471]: time="2025-11-08T00:47:51.142969310Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:47:51.145463 containerd[1471]: time="2025-11-08T00:47:51.143932623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:47:51.145463 containerd[1471]: time="2025-11-08T00:47:51.144431905Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:47:51.145560 kubelet[2558]: E1108 00:47:51.144757 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:47:51.145560 kubelet[2558]: E1108 00:47:51.144854 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:47:51.145560 kubelet[2558]: E1108 00:47:51.144997 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7zs58_calico-system(1dbab252-cddb-4b1b-96da-a6419c1af573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:47:51.145775 kubelet[2558]: E1108 00:47:51.145108 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:51.164461 systemd-networkd[1363]: cali9b72bcf0914: Gained IPv6LL Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.136 [INFO][4741] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.137 [INFO][4741] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" iface="eth0" netns="/var/run/netns/cni-dbef097e-3aa5-416c-9af3-30c37ed854e6" Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.138 [INFO][4741] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" iface="eth0" netns="/var/run/netns/cni-dbef097e-3aa5-416c-9af3-30c37ed854e6" Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.138 [INFO][4741] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" iface="eth0" netns="/var/run/netns/cni-dbef097e-3aa5-416c-9af3-30c37ed854e6" Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.138 [INFO][4741] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.138 [INFO][4741] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.185 [INFO][4749] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" HandleID="k8s-pod-network.3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.185 [INFO][4749] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.185 [INFO][4749] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.194 [WARNING][4749] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" HandleID="k8s-pod-network.3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.194 [INFO][4749] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" HandleID="k8s-pod-network.3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.195 [INFO][4749] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:51.203668 containerd[1471]: 2025-11-08 00:47:51.199 [INFO][4741] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:51.204638 containerd[1471]: time="2025-11-08T00:47:51.203928965Z" level=info msg="TearDown network for sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\" successfully" Nov 8 00:47:51.204638 containerd[1471]: time="2025-11-08T00:47:51.203970406Z" level=info msg="StopPodSandbox for \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\" returns successfully" Nov 8 00:47:51.207708 kubelet[2558]: E1108 00:47:51.207661 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:51.210285 containerd[1471]: time="2025-11-08T00:47:51.209569123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mqcsd,Uid:86a3264a-cc32-4906-a772-6e54de93c42f,Namespace:kube-system,Attempt:1,}" Nov 8 00:47:51.411603 systemd-networkd[1363]: cali300d083a4f7: Link UP Nov 8 00:47:51.415530 systemd-networkd[1363]: cali300d083a4f7: Gained carrier Nov 8 00:47:51.421507 systemd-networkd[1363]: cali96a222f6e40: Gained IPv6LL Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.303 [INFO][4756] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0 coredns-66bc5c9577- kube-system 86a3264a-cc32-4906-a772-6e54de93c42f 1042 0 2025-11-08 00:47:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-239-57-65 coredns-66bc5c9577-mqcsd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali300d083a4f7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Namespace="kube-system" Pod="coredns-66bc5c9577-mqcsd" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.304 [INFO][4756] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Namespace="kube-system" Pod="coredns-66bc5c9577-mqcsd" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.347 [INFO][4768] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" HandleID="k8s-pod-network.71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.348 [INFO][4768] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" HandleID="k8s-pod-network.71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56d0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-239-57-65", "pod":"coredns-66bc5c9577-mqcsd", "timestamp":"2025-11-08 00:47:51.347770842 +0000 UTC"}, Hostname:"172-239-57-65", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.348 [INFO][4768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.348 [INFO][4768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.348 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-239-57-65' Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.359 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" host="172-239-57-65" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.367 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="172-239-57-65" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.376 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.379 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.384 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.0/26 host="172-239-57-65" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.384 [INFO][4768] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.0/26 handle="k8s-pod-network.71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" host="172-239-57-65" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.386 [INFO][4768] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2 Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.391 [INFO][4768] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.0/26 handle="k8s-pod-network.71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" host="172-239-57-65" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.402 [INFO][4768] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.8/26] block=192.168.76.0/26 handle="k8s-pod-network.71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" host="172-239-57-65" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.402 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.8/26] handle="k8s-pod-network.71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" host="172-239-57-65" Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.402 [INFO][4768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
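Annotation: note the bracketing in both allocations — handlers [4650] and [4768] each log "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock" around the block update, which is why csi-node-driver gets .7 and coredns gets .8 with no race between concurrent CNI ADDs. A deliberately simplified in-process model of that serialization (Calico's real allocator serializes through datastore writes, not a mutex):

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type hostIPAM struct {
	mu    sync.Mutex // stands in for the host-wide IPAM lock in the log
	block netip.Prefix
	next  netip.Addr
}

// assign hands out the next free address from the node's affine block;
// the lock makes concurrent CNI ADDs see a consistent "next".
func (h *hostIPAM) assign() (netip.Addr, bool) {
	h.mu.Lock() // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock()
	if !h.block.Contains(h.next) {
		return netip.Addr{}, false // block exhausted
	}
	a := h.next
	h.next = h.next.Next()
	return a, true // lock released on return
}

func main() {
	h := &hostIPAM{
		block: netip.MustParsePrefix("192.168.76.0/26"),
		next:  netip.MustParseAddr("192.168.76.7"), // where the log left off
	}
	for i := 0; i < 2; i++ {
		a, _ := h.assign()
		fmt.Println(a) // .7 for csi-node-driver-7zs58, .8 for coredns-mqcsd
	}
}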
Nov 8 00:47:51.440611 containerd[1471]: 2025-11-08 00:47:51.402 [INFO][4768] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.8/26] IPv6=[] ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" HandleID="k8s-pod-network.71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:51.441967 containerd[1471]: 2025-11-08 00:47:51.406 [INFO][4756] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Namespace="kube-system" Pod="coredns-66bc5c9577-mqcsd" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86a3264a-cc32-4906-a772-6e54de93c42f", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"", Pod:"coredns-66bc5c9577-mqcsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali300d083a4f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:51.441967 containerd[1471]: 2025-11-08 00:47:51.406 [INFO][4756] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.8/32] ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Namespace="kube-system" Pod="coredns-66bc5c9577-mqcsd" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:51.441967 containerd[1471]: 2025-11-08 00:47:51.406 [INFO][4756] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali300d083a4f7 ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Namespace="kube-system" Pod="coredns-66bc5c9577-mqcsd" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:51.441967 
containerd[1471]: 2025-11-08 00:47:51.410 [INFO][4756] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Namespace="kube-system" Pod="coredns-66bc5c9577-mqcsd" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:51.441967 containerd[1471]: 2025-11-08 00:47:51.410 [INFO][4756] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Namespace="kube-system" Pod="coredns-66bc5c9577-mqcsd" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86a3264a-cc32-4906-a772-6e54de93c42f", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2", Pod:"coredns-66bc5c9577-mqcsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali300d083a4f7", MAC:"de:b3:ab:73:83:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:51.441967 containerd[1471]: 2025-11-08 00:47:51.436 [INFO][4756] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2" Namespace="kube-system" Pod="coredns-66bc5c9577-mqcsd" WorkloadEndpoint="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:51.470506 systemd[1]: run-netns-cni\x2ddbef097e\x2d3aa5\x2d416c\x2d9af3\x2d30c37ed854e6.mount: Deactivated successfully. Nov 8 00:47:51.492036 containerd[1471]: time="2025-11-08T00:47:51.491681220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:47:51.493068 containerd[1471]: time="2025-11-08T00:47:51.492824263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:47:51.493221 containerd[1471]: time="2025-11-08T00:47:51.493100114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:51.493567 containerd[1471]: time="2025-11-08T00:47:51.493433155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:47:51.549349 systemd[1]: Started cri-containerd-71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2.scope - libcontainer container 71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2. Nov 8 00:47:51.613454 containerd[1471]: time="2025-11-08T00:47:51.613313139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mqcsd,Uid:86a3264a-cc32-4906-a772-6e54de93c42f,Namespace:kube-system,Attempt:1,} returns sandbox id \"71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2\"" Nov 8 00:47:51.615242 kubelet[2558]: E1108 00:47:51.614687 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:51.620111 containerd[1471]: time="2025-11-08T00:47:51.619793998Z" level=info msg="CreateContainer within sandbox \"71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:47:51.641796 containerd[1471]: time="2025-11-08T00:47:51.641749525Z" level=info msg="CreateContainer within sandbox \"71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5c14cefed90dc519188ca227668e709db4e17095b5f86040f407f54f8de119a\"" Nov 8 00:47:51.642242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397848177.mount: Deactivated successfully. Nov 8 00:47:51.644187 containerd[1471]: time="2025-11-08T00:47:51.643442181Z" level=info msg="StartContainer for \"e5c14cefed90dc519188ca227668e709db4e17095b5f86040f407f54f8de119a\"" Nov 8 00:47:51.688473 systemd[1]: Started cri-containerd-e5c14cefed90dc519188ca227668e709db4e17095b5f86040f407f54f8de119a.scope - libcontainer container e5c14cefed90dc519188ca227668e709db4e17095b5f86040f407f54f8de119a. 
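Annotation: the v3.WorkloadEndpoint dump above prints the coredns ports in hex (Port:0x35 and friends). Decoded, they match the expected coredns container spec exactly; a one-line check per port:

package main

import "fmt"

func main() {
	// Hex ports from the WorkloadEndpoint dump, decoded:
	ports := map[string]int{
		"dns":             0x35,   // 53 (UDP)
		"dns-tcp":         0x35,   // 53 (TCP)
		"metrics":         0x23c1, // 9153
		"liveness-probe":  0x1f90, // 8080
		"readiness-probe": 0x1ff5, // 8181
	}
	for name, p := range ports {
		fmt.Printf("%s -> %d\n", name, p)
	}
}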
Nov 8 00:47:51.706199 kubelet[2558]: E1108 00:47:51.705508 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:51.711314 kubelet[2558]: E1108 00:47:51.711238 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:47:51.713206 kubelet[2558]: E1108 00:47:51.712879 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:47:51.713905 kubelet[2558]: E1108 00:47:51.713699 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:51.714307 kubelet[2558]: E1108 00:47:51.714236 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:47:51.741557 systemd-networkd[1363]: cali1af59411521: Gained IPv6LL Nov 8 00:47:51.774505 kubelet[2558]: I1108 00:47:51.772826 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7vn5c" podStartSLOduration=50.772783103 
podStartE2EDuration="50.772783103s" podCreationTimestamp="2025-11-08 00:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:47:51.747310116 +0000 UTC m=+57.889865716" watchObservedRunningTime="2025-11-08 00:47:51.772783103 +0000 UTC m=+57.915338703" Nov 8 00:47:51.774705 containerd[1471]: time="2025-11-08T00:47:51.773599336Z" level=info msg="StartContainer for \"e5c14cefed90dc519188ca227668e709db4e17095b5f86040f407f54f8de119a\" returns successfully" Nov 8 00:47:51.868969 systemd-networkd[1363]: cali65f8928a040: Gained IPv6LL Nov 8 00:47:51.869416 systemd-networkd[1363]: cali5fa68ff8f64: Gained IPv6LL Nov 8 00:47:52.717864 kubelet[2558]: E1108 00:47:52.717285 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:52.719767 kubelet[2558]: E1108 00:47:52.719641 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:52.721588 kubelet[2558]: E1108 00:47:52.721545 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:47:52.751539 kubelet[2558]: I1108 00:47:52.751439 2558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mqcsd" podStartSLOduration=51.751401176 podStartE2EDuration="51.751401176s" podCreationTimestamp="2025-11-08 00:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:47:52.73128269 +0000 UTC m=+58.873838300" watchObservedRunningTime="2025-11-08 00:47:52.751401176 +0000 UTC m=+58.893956776" Nov 8 00:47:53.406813 systemd-networkd[1363]: cali300d083a4f7: Gained IPv6LL Nov 8 00:47:53.720912 kubelet[2558]: E1108 00:47:53.720660 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:53.720912 kubelet[2558]: E1108 00:47:53.720825 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:54.051041 containerd[1471]: time="2025-11-08T00:47:54.050740954Z" level=info msg="StopPodSandbox for 
\"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\"" Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.107 [WARNING][4880] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-csi--node--driver--7zs58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1dbab252-cddb-4b1b-96da-a6419c1af573", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0", Pod:"csi-node-driver-7zs58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5fa68ff8f64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.111 [INFO][4880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.111 [INFO][4880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" iface="eth0" netns="" Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.111 [INFO][4880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.111 [INFO][4880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.136 [INFO][4890] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" HandleID="k8s-pod-network.92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.136 [INFO][4890] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.136 [INFO][4890] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.144 [WARNING][4890] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" HandleID="k8s-pod-network.92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.144 [INFO][4890] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" HandleID="k8s-pod-network.92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.146 [INFO][4890] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:54.152182 containerd[1471]: 2025-11-08 00:47:54.149 [INFO][4880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:54.152620 containerd[1471]: time="2025-11-08T00:47:54.152252291Z" level=info msg="TearDown network for sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\" successfully" Nov 8 00:47:54.152620 containerd[1471]: time="2025-11-08T00:47:54.152279051Z" level=info msg="StopPodSandbox for \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\" returns successfully" Nov 8 00:47:54.153378 containerd[1471]: time="2025-11-08T00:47:54.153331163Z" level=info msg="RemovePodSandbox for \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\"" Nov 8 00:47:54.153378 containerd[1471]: time="2025-11-08T00:47:54.153371414Z" level=info msg="Forcibly stopping sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\"" Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.200 [WARNING][4904] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-csi--node--driver--7zs58-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1dbab252-cddb-4b1b-96da-a6419c1af573", ResourceVersion:"1096", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"7df348ff8f13d7e2015246fe3fb21ef44a02938105dbb3b961288eb8c24c37a0", Pod:"csi-node-driver-7zs58", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5fa68ff8f64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.200 [INFO][4904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.200 [INFO][4904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" iface="eth0" netns="" Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.200 [INFO][4904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.200 [INFO][4904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.237 [INFO][4911] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" HandleID="k8s-pod-network.92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.237 [INFO][4911] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.237 [INFO][4911] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.245 [WARNING][4911] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" HandleID="k8s-pod-network.92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.245 [INFO][4911] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" HandleID="k8s-pod-network.92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Workload="172--239--57--65-k8s-csi--node--driver--7zs58-eth0" Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.247 [INFO][4911] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:54.256333 containerd[1471]: 2025-11-08 00:47:54.254 [INFO][4904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5" Nov 8 00:47:54.257493 containerd[1471]: time="2025-11-08T00:47:54.257037264Z" level=info msg="TearDown network for sandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\" successfully" Nov 8 00:47:54.263861 containerd[1471]: time="2025-11-08T00:47:54.263570400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:47:54.263861 containerd[1471]: time="2025-11-08T00:47:54.263668210Z" level=info msg="RemovePodSandbox \"92d8b0e5ce855b00c9881ce2124f86deafbe755e9cffa152adff60a7c2a2aee5\" returns successfully" Nov 8 00:47:54.265297 containerd[1471]: time="2025-11-08T00:47:54.265124743Z" level=info msg="StopPodSandbox for \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\"" Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.304 [WARNING][4926] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86a3264a-cc32-4906-a772-6e54de93c42f", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2", Pod:"coredns-66bc5c9577-mqcsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali300d083a4f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.305 [INFO][4926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.305 [INFO][4926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" iface="eth0" netns="" Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.306 [INFO][4926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.306 [INFO][4926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.350 [INFO][4933] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" HandleID="k8s-pod-network.3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.350 [INFO][4933] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.350 [INFO][4933] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.355 [WARNING][4933] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" HandleID="k8s-pod-network.3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.355 [INFO][4933] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" HandleID="k8s-pod-network.3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.357 [INFO][4933] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:54.362557 containerd[1471]: 2025-11-08 00:47:54.360 [INFO][4926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:54.363175 containerd[1471]: time="2025-11-08T00:47:54.363102251Z" level=info msg="TearDown network for sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\" successfully" Nov 8 00:47:54.363353 containerd[1471]: time="2025-11-08T00:47:54.363239862Z" level=info msg="StopPodSandbox for \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\" returns successfully" Nov 8 00:47:54.364176 containerd[1471]: time="2025-11-08T00:47:54.363986883Z" level=info msg="RemovePodSandbox for \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\"" Nov 8 00:47:54.364176 containerd[1471]: time="2025-11-08T00:47:54.364024733Z" level=info msg="Forcibly stopping sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\"" Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.400 [WARNING][4947] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"86a3264a-cc32-4906-a772-6e54de93c42f", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"71184bcddebfb299159ab1f41c8505f81a796338576ca647e99b5f8530ba5dd2", Pod:"coredns-66bc5c9577-mqcsd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali300d083a4f7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.401 [INFO][4947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.401 [INFO][4947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" iface="eth0" netns="" Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.401 [INFO][4947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.402 [INFO][4947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.432 [INFO][4954] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" HandleID="k8s-pod-network.3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.432 [INFO][4954] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.432 [INFO][4954] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.438 [WARNING][4954] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" HandleID="k8s-pod-network.3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.438 [INFO][4954] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" HandleID="k8s-pod-network.3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Workload="172--239--57--65-k8s-coredns--66bc5c9577--mqcsd-eth0" Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.440 [INFO][4954] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:54.445755 containerd[1471]: 2025-11-08 00:47:54.442 [INFO][4947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86" Nov 8 00:47:54.446281 containerd[1471]: time="2025-11-08T00:47:54.445836173Z" level=info msg="TearDown network for sandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\" successfully" Nov 8 00:47:54.450319 containerd[1471]: time="2025-11-08T00:47:54.450270774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:47:54.450392 containerd[1471]: time="2025-11-08T00:47:54.450331184Z" level=info msg="RemovePodSandbox \"3bb79b382be7969bdfdfdaccb4d1f6085cc4c792858408cb1e7753be21980c86\" returns successfully" Nov 8 00:47:54.450841 containerd[1471]: time="2025-11-08T00:47:54.450803565Z" level=info msg="StopPodSandbox for \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\"" Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.490 [WARNING][4968] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0", GenerateName:"calico-apiserver-5866ffd9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"91a10bd0-ee88-4b71-90ab-bbe7e6569a64", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5866ffd9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e", Pod:"calico-apiserver-5866ffd9dc-4k97x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37cd8ea7b67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.490 [INFO][4968] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.490 [INFO][4968] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" iface="eth0" netns="" Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.490 [INFO][4968] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.490 [INFO][4968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.521 [INFO][4975] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" HandleID="k8s-pod-network.d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.521 [INFO][4975] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.521 [INFO][4975] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.528 [WARNING][4975] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" HandleID="k8s-pod-network.d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.528 [INFO][4975] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" HandleID="k8s-pod-network.d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.530 [INFO][4975] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:54.535714 containerd[1471]: 2025-11-08 00:47:54.532 [INFO][4968] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:54.536239 containerd[1471]: time="2025-11-08T00:47:54.535793913Z" level=info msg="TearDown network for sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\" successfully" Nov 8 00:47:54.536239 containerd[1471]: time="2025-11-08T00:47:54.535901613Z" level=info msg="StopPodSandbox for \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\" returns successfully" Nov 8 00:47:54.537534 containerd[1471]: time="2025-11-08T00:47:54.537113556Z" level=info msg="RemovePodSandbox for \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\"" Nov 8 00:47:54.537534 containerd[1471]: time="2025-11-08T00:47:54.537203396Z" level=info msg="Forcibly stopping sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\"" Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.596 [WARNING][4989] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0", GenerateName:"calico-apiserver-5866ffd9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"91a10bd0-ee88-4b71-90ab-bbe7e6569a64", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5866ffd9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"f73996e035a087c1f3ce568d4a348ee1edbdfd0d947c2ed06675bd68cdf2c65e", Pod:"calico-apiserver-5866ffd9dc-4k97x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali37cd8ea7b67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.596 [INFO][4989] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.596 [INFO][4989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" iface="eth0" netns="" Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.596 [INFO][4989] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.596 [INFO][4989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.631 [INFO][4996] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" HandleID="k8s-pod-network.d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.632 [INFO][4996] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.632 [INFO][4996] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.640 [WARNING][4996] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" HandleID="k8s-pod-network.d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.640 [INFO][4996] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" HandleID="k8s-pod-network.d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--4k97x-eth0" Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.641 [INFO][4996] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:54.647945 containerd[1471]: 2025-11-08 00:47:54.644 [INFO][4989] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059" Nov 8 00:47:54.648811 containerd[1471]: time="2025-11-08T00:47:54.648531546Z" level=info msg="TearDown network for sandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\" successfully" Nov 8 00:47:54.653257 containerd[1471]: time="2025-11-08T00:47:54.652863805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:47:54.653257 containerd[1471]: time="2025-11-08T00:47:54.653129866Z" level=info msg="RemovePodSandbox \"d02c36514c1a8dd163f14e4f18d575d5ea1d41f4e11a7d777c2227a62a4a0059\" returns successfully" Nov 8 00:47:54.653671 containerd[1471]: time="2025-11-08T00:47:54.653640728Z" level=info msg="StopPodSandbox for \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\"" Nov 8 00:47:54.730587 kubelet[2558]: E1108 00:47:54.730513 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.690 [WARNING][5010] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" WorkloadEndpoint="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.690 [INFO][5010] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.690 [INFO][5010] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" iface="eth0" netns="" Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.690 [INFO][5010] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.690 [INFO][5010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.715 [INFO][5017] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" HandleID="k8s-pod-network.fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Workload="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.715 [INFO][5017] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.715 [INFO][5017] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.724 [WARNING][5017] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" HandleID="k8s-pod-network.fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Workload="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.724 [INFO][5017] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" HandleID="k8s-pod-network.fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Workload="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.727 [INFO][5017] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:54.735593 containerd[1471]: 2025-11-08 00:47:54.731 [INFO][5010] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:54.735593 containerd[1471]: time="2025-11-08T00:47:54.735481558Z" level=info msg="TearDown network for sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\" successfully" Nov 8 00:47:54.735593 containerd[1471]: time="2025-11-08T00:47:54.735510498Z" level=info msg="StopPodSandbox for \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\" returns successfully" Nov 8 00:47:54.736669 containerd[1471]: time="2025-11-08T00:47:54.736108349Z" level=info msg="RemovePodSandbox for \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\"" Nov 8 00:47:54.736669 containerd[1471]: time="2025-11-08T00:47:54.736129899Z" level=info msg="Forcibly stopping sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\"" Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.776 [WARNING][5031] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" WorkloadEndpoint="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.776 [INFO][5031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.776 [INFO][5031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" iface="eth0" netns="" Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.776 [INFO][5031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.776 [INFO][5031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.802 [INFO][5038] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" HandleID="k8s-pod-network.fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Workload="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.803 [INFO][5038] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.803 [INFO][5038] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.811 [WARNING][5038] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" HandleID="k8s-pod-network.fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Workload="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.811 [INFO][5038] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" HandleID="k8s-pod-network.fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Workload="172--239--57--65-k8s-whisker--77cdfbb855--ghqpc-eth0" Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.814 [INFO][5038] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:54.822177 containerd[1471]: 2025-11-08 00:47:54.818 [INFO][5031] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1" Nov 8 00:47:54.822177 containerd[1471]: time="2025-11-08T00:47:54.820952326Z" level=info msg="TearDown network for sandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\" successfully" Nov 8 00:47:54.825513 containerd[1471]: time="2025-11-08T00:47:54.825487347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:47:54.825655 containerd[1471]: time="2025-11-08T00:47:54.825631838Z" level=info msg="RemovePodSandbox \"fcccd73131f164f97a77768d1d71b1b37f7026856bdbaf8362a34d3ebf6ce9a1\" returns successfully" Nov 8 00:47:54.826621 containerd[1471]: time="2025-11-08T00:47:54.826318109Z" level=info msg="StopPodSandbox for \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\"" Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.879 [WARNING][5053] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b7389023-6931-4371-ab96-cba907ffb0fd", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0", Pod:"coredns-66bc5c9577-7vn5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65f8928a040", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.879 [INFO][5053] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.880 [INFO][5053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" iface="eth0" netns="" Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.880 [INFO][5053] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.880 [INFO][5053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.925 [INFO][5061] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" HandleID="k8s-pod-network.150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.925 [INFO][5061] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.925 [INFO][5061] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.931 [WARNING][5061] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" HandleID="k8s-pod-network.150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.931 [INFO][5061] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" HandleID="k8s-pod-network.150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.933 [INFO][5061] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:54.937729 containerd[1471]: 2025-11-08 00:47:54.935 [INFO][5053] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:54.937729 containerd[1471]: time="2025-11-08T00:47:54.937477964Z" level=info msg="TearDown network for sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\" successfully" Nov 8 00:47:54.937729 containerd[1471]: time="2025-11-08T00:47:54.937505535Z" level=info msg="StopPodSandbox for \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\" returns successfully" Nov 8 00:47:54.938388 containerd[1471]: time="2025-11-08T00:47:54.938071738Z" level=info msg="RemovePodSandbox for \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\"" Nov 8 00:47:54.938388 containerd[1471]: time="2025-11-08T00:47:54.938095639Z" level=info msg="Forcibly stopping sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\"" Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:54.977 [WARNING][5076] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b7389023-6931-4371-ab96-cba907ffb0fd", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"db689eb49b12344db91ae3ce63889a944f830b1bad57fd4c1d8ff45b1e6654e0", Pod:"coredns-66bc5c9577-7vn5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali65f8928a040", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:54.977 [INFO][5076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:54.977 [INFO][5076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" iface="eth0" netns="" Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:54.977 [INFO][5076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:54.977 [INFO][5076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:55.009 [INFO][5083] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" HandleID="k8s-pod-network.150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:55.009 [INFO][5083] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:55.009 [INFO][5083] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:55.015 [WARNING][5083] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" HandleID="k8s-pod-network.150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:55.015 [INFO][5083] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" HandleID="k8s-pod-network.150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Workload="172--239--57--65-k8s-coredns--66bc5c9577--7vn5c-eth0" Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:55.016 [INFO][5083] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:55.021912 containerd[1471]: 2025-11-08 00:47:55.019 [INFO][5076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb" Nov 8 00:47:55.022478 containerd[1471]: time="2025-11-08T00:47:55.021952433Z" level=info msg="TearDown network for sandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\" successfully" Nov 8 00:47:55.025706 containerd[1471]: time="2025-11-08T00:47:55.025679173Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:47:55.025763 containerd[1471]: time="2025-11-08T00:47:55.025735386Z" level=info msg="RemovePodSandbox \"150df34e02e0a2a8776f81f25643d31d2feb0b43664220c054c6480b44d27afb\" returns successfully" Nov 8 00:47:55.026217 containerd[1471]: time="2025-11-08T00:47:55.026197546Z" level=info msg="StopPodSandbox for \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\"" Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.064 [WARNING][5097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0", GenerateName:"calico-apiserver-5866ffd9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e86dafc-d904-4554-bd5d-17e562479113", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5866ffd9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe", Pod:"calico-apiserver-5866ffd9dc-wrnwt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1af59411521", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.065 [INFO][5097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.065 [INFO][5097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" iface="eth0" netns="" Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.065 [INFO][5097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.065 [INFO][5097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.091 [INFO][5105] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" HandleID="k8s-pod-network.380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.091 [INFO][5105] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.091 [INFO][5105] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.100 [WARNING][5105] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" HandleID="k8s-pod-network.380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.100 [INFO][5105] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" HandleID="k8s-pod-network.380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.102 [INFO][5105] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:55.108584 containerd[1471]: 2025-11-08 00:47:55.105 [INFO][5097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:55.112263 containerd[1471]: time="2025-11-08T00:47:55.108684896Z" level=info msg="TearDown network for sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\" successfully" Nov 8 00:47:55.112263 containerd[1471]: time="2025-11-08T00:47:55.108710397Z" level=info msg="StopPodSandbox for \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\" returns successfully" Nov 8 00:47:55.112263 containerd[1471]: time="2025-11-08T00:47:55.109745702Z" level=info msg="RemovePodSandbox for \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\"" Nov 8 00:47:55.112263 containerd[1471]: time="2025-11-08T00:47:55.109792694Z" level=info msg="Forcibly stopping sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\"" Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.152 [WARNING][5119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0", GenerateName:"calico-apiserver-5866ffd9dc-", Namespace:"calico-apiserver", SelfLink:"", UID:"2e86dafc-d904-4554-bd5d-17e562479113", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5866ffd9dc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"482ed4ad31216c40929999c144ae22d6bdaf37462b24ee7750ab3d411db5c2fe", Pod:"calico-apiserver-5866ffd9dc-wrnwt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1af59411521", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.153 [INFO][5119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.153 [INFO][5119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" iface="eth0" netns="" Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.153 [INFO][5119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.153 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.175 [INFO][5126] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" HandleID="k8s-pod-network.380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.175 [INFO][5126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.175 [INFO][5126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.182 [WARNING][5126] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" HandleID="k8s-pod-network.380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.182 [INFO][5126] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" HandleID="k8s-pod-network.380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Workload="172--239--57--65-k8s-calico--apiserver--5866ffd9dc--wrnwt-eth0" Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.183 [INFO][5126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:55.188623 containerd[1471]: 2025-11-08 00:47:55.185 [INFO][5119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03" Nov 8 00:47:55.189084 containerd[1471]: time="2025-11-08T00:47:55.188614287Z" level=info msg="TearDown network for sandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\" successfully" Nov 8 00:47:55.192409 containerd[1471]: time="2025-11-08T00:47:55.192371439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:47:55.192519 containerd[1471]: time="2025-11-08T00:47:55.192462913Z" level=info msg="RemovePodSandbox \"380da0675e6ab75a16c6804f3684e002d67650fc141a44aa3478817f26dfba03\" returns successfully" Nov 8 00:47:55.193382 containerd[1471]: time="2025-11-08T00:47:55.193353831Z" level=info msg="StopPodSandbox for \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\"" Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.259 [WARNING][5140] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0", GenerateName:"calico-kube-controllers-64fb4f5b7-", Namespace:"calico-system", SelfLink:"", UID:"0f272084-e1d2-446c-8416-88d0ec3d2c2e", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fb4f5b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8", Pod:"calico-kube-controllers-64fb4f5b7-cmrkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b72bcf0914", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.260 [INFO][5140] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.260 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" iface="eth0" netns="" Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.260 [INFO][5140] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.260 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.288 [INFO][5148] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" HandleID="k8s-pod-network.b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.289 [INFO][5148] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.289 [INFO][5148] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.297 [WARNING][5148] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" HandleID="k8s-pod-network.b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.297 [INFO][5148] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" HandleID="k8s-pod-network.b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.299 [INFO][5148] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:55.305232 containerd[1471]: 2025-11-08 00:47:55.302 [INFO][5140] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:55.305999 containerd[1471]: time="2025-11-08T00:47:55.305401857Z" level=info msg="TearDown network for sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\" successfully" Nov 8 00:47:55.305999 containerd[1471]: time="2025-11-08T00:47:55.305430419Z" level=info msg="StopPodSandbox for \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\" returns successfully" Nov 8 00:47:55.306452 containerd[1471]: time="2025-11-08T00:47:55.306414701Z" level=info msg="RemovePodSandbox for \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\"" Nov 8 00:47:55.306452 containerd[1471]: time="2025-11-08T00:47:55.306447433Z" level=info msg="Forcibly stopping sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\"" Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.350 [WARNING][5163] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0", GenerateName:"calico-kube-controllers-64fb4f5b7-", Namespace:"calico-system", SelfLink:"", UID:"0f272084-e1d2-446c-8416-88d0ec3d2c2e", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64fb4f5b7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"c3d18f75c721deafa21ecef822bc3856ad962fb7aec9ddbbf512d4ff6c40b9e8", Pod:"calico-kube-controllers-64fb4f5b7-cmrkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9b72bcf0914", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.350 [INFO][5163] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.350 [INFO][5163] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" iface="eth0" netns="" Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.350 [INFO][5163] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.350 [INFO][5163] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.378 [INFO][5170] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" HandleID="k8s-pod-network.b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.378 [INFO][5170] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.378 [INFO][5170] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.385 [WARNING][5170] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" HandleID="k8s-pod-network.b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.385 [INFO][5170] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" HandleID="k8s-pod-network.b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Workload="172--239--57--65-k8s-calico--kube--controllers--64fb4f5b7--cmrkg-eth0" Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.387 [INFO][5170] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:55.393559 containerd[1471]: 2025-11-08 00:47:55.390 [INFO][5163] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca" Nov 8 00:47:55.394016 containerd[1471]: time="2025-11-08T00:47:55.393606585Z" level=info msg="TearDown network for sandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\" successfully" Nov 8 00:47:55.400241 containerd[1471]: time="2025-11-08T00:47:55.399257868Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:47:55.400241 containerd[1471]: time="2025-11-08T00:47:55.399331542Z" level=info msg="RemovePodSandbox \"b7fa896ee80cc888a3938167612e25f6c91df931ce49473d7891983da04308ca\" returns successfully" Nov 8 00:47:55.400241 containerd[1471]: time="2025-11-08T00:47:55.399930737Z" level=info msg="StopPodSandbox for \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\"" Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.440 [WARNING][5184] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7494706d-b88b-42e3-9001-7633cd787a06", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce", Pod:"goldmane-7c778bb748-wb7lg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96a222f6e40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.440 [INFO][5184] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.440 [INFO][5184] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" iface="eth0" netns="" Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.440 [INFO][5184] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.440 [INFO][5184] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.467 [INFO][5191] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" HandleID="k8s-pod-network.0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.467 [INFO][5191] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.467 [INFO][5191] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.472 [WARNING][5191] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" HandleID="k8s-pod-network.0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.472 [INFO][5191] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" HandleID="k8s-pod-network.0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.474 [INFO][5191] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:55.481450 containerd[1471]: 2025-11-08 00:47:55.477 [INFO][5184] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:55.481450 containerd[1471]: time="2025-11-08T00:47:55.480424592Z" level=info msg="TearDown network for sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\" successfully" Nov 8 00:47:55.481450 containerd[1471]: time="2025-11-08T00:47:55.480452733Z" level=info msg="StopPodSandbox for \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\" returns successfully" Nov 8 00:47:55.482096 containerd[1471]: time="2025-11-08T00:47:55.481773739Z" level=info msg="RemovePodSandbox for \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\"" Nov 8 00:47:55.482096 containerd[1471]: time="2025-11-08T00:47:55.481969038Z" level=info msg="Forcibly stopping sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\"" Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.526 [WARNING][5205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7494706d-b88b-42e3-9001-7633cd787a06", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 47, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-239-57-65", ContainerID:"423e84ec00cc643516bcd42e2565c38f7f0d0864d0e5522ba874d1c672f88dce", Pod:"goldmane-7c778bb748-wb7lg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali96a222f6e40", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.527 [INFO][5205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.527 [INFO][5205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" iface="eth0" netns="" Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.527 [INFO][5205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.527 [INFO][5205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.578 [INFO][5212] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" HandleID="k8s-pod-network.0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.578 [INFO][5212] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.578 [INFO][5212] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.585 [WARNING][5212] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" HandleID="k8s-pod-network.0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.585 [INFO][5212] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" HandleID="k8s-pod-network.0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Workload="172--239--57--65-k8s-goldmane--7c778bb748--wb7lg-eth0" Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.587 [INFO][5212] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:47:55.594258 containerd[1471]: 2025-11-08 00:47:55.590 [INFO][5205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b" Nov 8 00:47:55.594258 containerd[1471]: time="2025-11-08T00:47:55.593064533Z" level=info msg="TearDown network for sandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\" successfully" Nov 8 00:47:55.598283 containerd[1471]: time="2025-11-08T00:47:55.597891211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:47:55.598283 containerd[1471]: time="2025-11-08T00:47:55.597951534Z" level=info msg="RemovePodSandbox \"0e1acff1f368251f96548f538cc984a140e76212f9b4f535ef66dba11f79430b\" returns successfully" Nov 8 00:48:01.076497 containerd[1471]: time="2025-11-08T00:48:01.075829464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:48:01.210552 containerd[1471]: time="2025-11-08T00:48:01.210497050Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:01.211659 containerd[1471]: time="2025-11-08T00:48:01.211533647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:48:01.211659 containerd[1471]: time="2025-11-08T00:48:01.211586659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:48:01.211848 kubelet[2558]: E1108 00:48:01.211777 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:48:01.212510 kubelet[2558]: E1108 00:48:01.211858 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:48:01.212510 kubelet[2558]: E1108 00:48:01.211947 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
Nov 8 00:48:01.076497 containerd[1471]: time="2025-11-08T00:48:01.075829464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:48:01.210552 containerd[1471]: time="2025-11-08T00:48:01.210497050Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:48:01.211659 containerd[1471]: time="2025-11-08T00:48:01.211533647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:48:01.211659 containerd[1471]: time="2025-11-08T00:48:01.211586659Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:48:01.211848 kubelet[2558]: E1108 00:48:01.211777 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:48:01.212510 kubelet[2558]: E1108 00:48:01.211858 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:48:01.212510 kubelet[2558]: E1108 00:48:01.211947 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5866ffd9dc-4k97x_calico-apiserver(91a10bd0-ee88-4b71-90ab-bbe7e6569a64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:48:01.212510 kubelet[2558]: E1108 00:48:01.211987 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64"
Nov 8 00:48:02.075121 containerd[1471]: time="2025-11-08T00:48:02.074871339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:48:02.294207 containerd[1471]: time="2025-11-08T00:48:02.294045544Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:48:02.296120 containerd[1471]: time="2025-11-08T00:48:02.295931500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:48:02.296547 containerd[1471]: time="2025-11-08T00:48:02.296024664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:48:02.297203 kubelet[2558]: E1108 00:48:02.297116 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:48:02.299293 kubelet[2558]: E1108 00:48:02.297205 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:48:02.299293 kubelet[2558]: E1108 00:48:02.297451 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wb7lg_calico-system(7494706d-b88b-42e3-9001-7633cd787a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:48:02.299293 kubelet[2558]: E1108 00:48:02.297525 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:48:02.299441 containerd[1471]: time="2025-11-08T00:48:02.297947981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:48:02.446373 containerd[1471]: time="2025-11-08T00:48:02.446303655Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:02.447589 containerd[1471]: time="2025-11-08T00:48:02.447402804Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:48:02.447589 containerd[1471]: time="2025-11-08T00:48:02.447475836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:48:02.447708 kubelet[2558]: E1108 00:48:02.447650 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:48:02.447708 kubelet[2558]: E1108 00:48:02.447702 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:48:02.447844 kubelet[2558]: E1108 00:48:02.447779 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-db65b7ddc-kmwf2_calico-system(01ba0641-89d0-49ee-914a-4dc2009268af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:02.449418 containerd[1471]: time="2025-11-08T00:48:02.449281330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:48:02.581951 containerd[1471]: time="2025-11-08T00:48:02.581823125Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:02.583502 containerd[1471]: time="2025-11-08T00:48:02.583438571Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:48:02.583735 containerd[1471]: time="2025-11-08T00:48:02.583563976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:48:02.583944 kubelet[2558]: E1108 00:48:02.583810 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:48:02.583944 kubelet[2558]: E1108 00:48:02.583869 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:48:02.584107 kubelet[2558]: E1108 00:48:02.583989 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-db65b7ddc-kmwf2_calico-system(01ba0641-89d0-49ee-914a-4dc2009268af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:02.584107 kubelet[2558]: E1108 00:48:02.584034 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af" Nov 8 00:48:03.073912 containerd[1471]: time="2025-11-08T00:48:03.073664945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:48:03.227693 containerd[1471]: time="2025-11-08T00:48:03.227609474Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:03.228684 containerd[1471]: time="2025-11-08T00:48:03.228632089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:48:03.228802 containerd[1471]: time="2025-11-08T00:48:03.228738943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:48:03.228954 kubelet[2558]: E1108 00:48:03.228907 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:48:03.229057 kubelet[2558]: E1108 00:48:03.228968 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:48:03.229146 kubelet[2558]: E1108 00:48:03.229085 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5866ffd9dc-wrnwt_calico-apiserver(2e86dafc-d904-4554-bd5d-17e562479113): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:03.229201 kubelet[2558]: E1108 00:48:03.229166 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:48:05.074260 containerd[1471]: time="2025-11-08T00:48:05.073948542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:48:05.228687 containerd[1471]: time="2025-11-08T00:48:05.228309226Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:05.229542 containerd[1471]: time="2025-11-08T00:48:05.229489664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:48:05.229608 containerd[1471]: time="2025-11-08T00:48:05.229578057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:48:05.229956 kubelet[2558]: E1108 00:48:05.229865 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:48:05.231191 kubelet[2558]: E1108 00:48:05.229978 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:48:05.231191 kubelet[2558]: E1108 00:48:05.230124 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-64fb4f5b7-cmrkg_calico-system(0f272084-e1d2-446c-8416-88d0ec3d2c2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:05.231191 kubelet[2558]: E1108 00:48:05.230207 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:48:08.076204 containerd[1471]: time="2025-11-08T00:48:08.075951884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:48:08.225895 containerd[1471]: time="2025-11-08T00:48:08.225779932Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:08.226984 containerd[1471]: time="2025-11-08T00:48:08.226909206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:48:08.227093 containerd[1471]: time="2025-11-08T00:48:08.226915166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:48:08.227441 kubelet[2558]: E1108 00:48:08.227379 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:48:08.227441 kubelet[2558]: E1108 00:48:08.227440 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:48:08.228316 kubelet[2558]: E1108 00:48:08.227523 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7zs58_calico-system(1dbab252-cddb-4b1b-96da-a6419c1af573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:08.229798 containerd[1471]: time="2025-11-08T00:48:08.229729480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:48:08.366486 containerd[1471]: time="2025-11-08T00:48:08.366432285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:08.367937 containerd[1471]: time="2025-11-08T00:48:08.367865409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found" Nov 8 00:48:08.368076 containerd[1471]: time="2025-11-08T00:48:08.367907490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:48:08.368305 kubelet[2558]: E1108 00:48:08.368221 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:48:08.368385 kubelet[2558]: E1108 00:48:08.368329 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:48:08.368479 kubelet[2558]: E1108 00:48:08.368449 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7zs58_calico-system(1dbab252-cddb-4b1b-96da-a6419c1af573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:08.368688 kubelet[2558]: E1108 00:48:08.368521 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:48:09.072768 kubelet[2558]: E1108 00:48:09.072474 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:48:12.072936 kubelet[2558]: E1108 00:48:12.072178 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:48:13.073258 kubelet[2558]: E1108 00:48:13.073088 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:48:14.075817 kubelet[2558]: E1108 00:48:14.075275 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:48:14.077663 kubelet[2558]: E1108 00:48:14.076727 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af" Nov 8 00:48:15.794195 kubelet[2558]: E1108 00:48:15.794047 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:48:16.077989 kubelet[2558]: E1108 00:48:16.077813 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:48:16.079180 kubelet[2558]: E1108 00:48:16.078319 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:48:17.075024 kubelet[2558]: E1108 00:48:17.074494 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:48:24.079468 kubelet[2558]: E1108 00:48:24.079082 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:48:25.085630 containerd[1471]: time="2025-11-08T00:48:25.084747188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:48:25.250841 containerd[1471]: time="2025-11-08T00:48:25.249919465Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:25.254308 containerd[1471]: time="2025-11-08T00:48:25.251739039Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:48:25.254308 containerd[1471]: time="2025-11-08T00:48:25.251882762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:48:25.254488 kubelet[2558]: E1108 00:48:25.253461 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:48:25.254488 kubelet[2558]: E1108 00:48:25.253587 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:48:25.254488 kubelet[2558]: E1108 00:48:25.253889 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wb7lg_calico-system(7494706d-b88b-42e3-9001-7633cd787a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:25.254488 kubelet[2558]: E1108 00:48:25.253922 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:48:25.255426 containerd[1471]: time="2025-11-08T00:48:25.255260577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:48:25.391262 containerd[1471]: time="2025-11-08T00:48:25.389398321Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:25.391262 containerd[1471]: time="2025-11-08T00:48:25.390743477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:48:25.391262 containerd[1471]: time="2025-11-08T00:48:25.390808579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:48:25.392386 kubelet[2558]: E1108 00:48:25.391694 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:48:25.392386 kubelet[2558]: E1108 00:48:25.391774 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:48:25.392386 kubelet[2558]: E1108 00:48:25.391901 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-db65b7ddc-kmwf2_calico-system(01ba0641-89d0-49ee-914a-4dc2009268af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:25.393877 containerd[1471]: time="2025-11-08T00:48:25.393590692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:48:25.544086 containerd[1471]: time="2025-11-08T00:48:25.544030028Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:25.545436 containerd[1471]: time="2025-11-08T00:48:25.545211511Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:48:25.545436 containerd[1471]: time="2025-11-08T00:48:25.545325013Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:48:25.546376 kubelet[2558]: E1108 00:48:25.545732 2558 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:48:25.546376 kubelet[2558]: E1108 00:48:25.545798 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:48:25.546376 kubelet[2558]: E1108 00:48:25.545901 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-db65b7ddc-kmwf2_calico-system(01ba0641-89d0-49ee-914a-4dc2009268af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:25.546539 kubelet[2558]: E1108 00:48:25.545958 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af" Nov 8 00:48:27.075482 containerd[1471]: time="2025-11-08T00:48:27.075305979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:48:27.202239 containerd[1471]: time="2025-11-08T00:48:27.201709579Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:27.203548 containerd[1471]: time="2025-11-08T00:48:27.203362539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:48:27.203548 containerd[1471]: time="2025-11-08T00:48:27.203472861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:48:27.205524 kubelet[2558]: E1108 00:48:27.203923 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:48:27.205524 kubelet[2558]: E1108 00:48:27.203993 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:48:27.205524 kubelet[2558]: E1108 00:48:27.204103 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-64fb4f5b7-cmrkg_calico-system(0f272084-e1d2-446c-8416-88d0ec3d2c2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:27.205524 kubelet[2558]: E1108 00:48:27.204377 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:48:29.073892 kubelet[2558]: E1108 00:48:29.073591 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:48:30.080685 containerd[1471]: time="2025-11-08T00:48:30.080566633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:48:30.280472 containerd[1471]: time="2025-11-08T00:48:30.280396906Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:30.282038 containerd[1471]: time="2025-11-08T00:48:30.281866891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:48:30.282220 containerd[1471]: time="2025-11-08T00:48:30.282001213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:48:30.282623 kubelet[2558]: E1108 00:48:30.282545 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:48:30.284036 kubelet[2558]: E1108 00:48:30.282640 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:48:30.284036 kubelet[2558]: E1108 00:48:30.282795 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5866ffd9dc-wrnwt_calico-apiserver(2e86dafc-d904-4554-bd5d-17e562479113): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:30.284036 kubelet[2558]: E1108 00:48:30.282972 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:48:31.077012 containerd[1471]: time="2025-11-08T00:48:31.076944470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:48:31.207709 containerd[1471]: time="2025-11-08T00:48:31.207633347Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:31.208856 containerd[1471]: time="2025-11-08T00:48:31.208805206Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:48:31.208961 containerd[1471]: time="2025-11-08T00:48:31.208905898Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:48:31.210230 kubelet[2558]: E1108 00:48:31.209441 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:48:31.210230 kubelet[2558]: E1108 00:48:31.209571 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:48:31.210230 kubelet[2558]: E1108 00:48:31.209670 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5866ffd9dc-4k97x_calico-apiserver(91a10bd0-ee88-4b71-90ab-bbe7e6569a64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:31.210230 kubelet[2558]: E1108 00:48:31.209724 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:48:38.082284 kubelet[2558]: E1108 00:48:38.080984 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:48:39.074849 containerd[1471]: time="2025-11-08T00:48:39.074621081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:48:39.231121 containerd[1471]: time="2025-11-08T00:48:39.231058771Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:39.235155 containerd[1471]: time="2025-11-08T00:48:39.233254871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:48:39.235155 containerd[1471]: time="2025-11-08T00:48:39.233580805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:48:39.235261 kubelet[2558]: E1108 00:48:39.234877 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:48:39.235261 kubelet[2558]: E1108 00:48:39.235123 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:48:39.236705 kubelet[2558]: E1108 00:48:39.236651 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7zs58_calico-system(1dbab252-cddb-4b1b-96da-a6419c1af573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:39.240129 containerd[1471]: time="2025-11-08T00:48:39.239904782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:48:39.384497 containerd[1471]: time="2025-11-08T00:48:39.384254778Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:48:39.385557 containerd[1471]: time="2025-11-08T00:48:39.385446724Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:48:39.385557 containerd[1471]: time="2025-11-08T00:48:39.385523454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:48:39.385810 kubelet[2558]: E1108 00:48:39.385723 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:48:39.385882 kubelet[2558]: E1108 00:48:39.385826 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:48:39.386177 kubelet[2558]: E1108 00:48:39.386117 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7zs58_calico-system(1dbab252-cddb-4b1b-96da-a6419c1af573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:48:39.386789 kubelet[2558]: E1108 00:48:39.386396 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:48:41.080178 kubelet[2558]: E1108 00:48:41.079528 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af" Nov 8 00:48:43.076483 kubelet[2558]: E1108 00:48:43.076325 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:48:43.076483 kubelet[2558]: E1108 00:48:43.076400 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:48:44.078574 kubelet[2558]: E1108 00:48:44.078380 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:48:51.082288 kubelet[2558]: E1108 00:48:51.081649 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:48:52.085918 kubelet[2558]: E1108 00:48:52.085774 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:48:55.075184 kubelet[2558]: E1108 00:48:55.072927 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:48:55.080291 kubelet[2558]: E1108 00:48:55.080008 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:48:55.081000 kubelet[2558]: E1108 00:48:55.080915 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af" Nov 8 00:48:56.074998 kubelet[2558]: E1108 00:48:56.074683 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:48:57.074211 kubelet[2558]: E1108 00:48:57.073929 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:48:58.073463 kubelet[2558]: E1108 00:48:58.072882 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:49:00.073197 kubelet[2558]: E1108 00:49:00.072994 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:49:03.075236 kubelet[2558]: E1108 00:49:03.075130 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:49:06.091230 kubelet[2558]: E1108 00:49:06.089987 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:49:06.097012 containerd[1471]: time="2025-11-08T00:49:06.090726484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:49:06.231431 containerd[1471]: time="2025-11-08T00:49:06.231344993Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:49:06.232871 containerd[1471]: time="2025-11-08T00:49:06.232752104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:49:06.233295 containerd[1471]: time="2025-11-08T00:49:06.232768574Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:49:06.233427 kubelet[2558]: E1108 00:49:06.233339 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:49:06.233525 kubelet[2558]: E1108 00:49:06.233486 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:49:06.233910 kubelet[2558]: E1108 00:49:06.233854 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-db65b7ddc-kmwf2_calico-system(01ba0641-89d0-49ee-914a-4dc2009268af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:49:06.237123 containerd[1471]: time="2025-11-08T00:49:06.234876541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:49:06.373538 containerd[1471]: time="2025-11-08T00:49:06.372979730Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:49:06.375251 containerd[1471]: time="2025-11-08T00:49:06.374830885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:49:06.375251 containerd[1471]: time="2025-11-08T00:49:06.375127717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:49:06.376338 kubelet[2558]: E1108 00:49:06.375557 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:49:06.376338 kubelet[2558]: E1108 00:49:06.375617 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:49:06.376338 kubelet[2558]: E1108 00:49:06.375876 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wb7lg_calico-system(7494706d-b88b-42e3-9001-7633cd787a06): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:49:06.376338 kubelet[2558]: E1108 00:49:06.375957 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:49:06.376527 containerd[1471]: time="2025-11-08T00:49:06.375994494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:49:06.505190 containerd[1471]: time="2025-11-08T00:49:06.504918599Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:49:06.508575 containerd[1471]: time="2025-11-08T00:49:06.508206606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:49:06.508575 containerd[1471]: time="2025-11-08T00:49:06.508314117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:49:06.509054 kubelet[2558]: E1108 00:49:06.508830 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:49:06.509054 kubelet[2558]: E1108 00:49:06.508913 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:49:06.509880 kubelet[2558]: E1108 00:49:06.509644 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-db65b7ddc-kmwf2_calico-system(01ba0641-89d0-49ee-914a-4dc2009268af): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:49:06.509880 kubelet[2558]: E1108 00:49:06.509712 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af" Nov 8 00:49:08.081168 kubelet[2558]: E1108 00:49:08.080079 2558 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:49:09.317584 systemd[1]: Started sshd@7-172.239.57.65:22-147.75.109.163:59156.service - OpenSSH per-connection server daemon (147.75.109.163:59156). Nov 8 00:49:09.679248 sshd[5310]: Accepted publickey for core from 147.75.109.163 port 59156 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:09.686834 sshd[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:09.702242 systemd-logind[1442]: New session 8 of user core. Nov 8 00:49:09.709698 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:49:10.072795 sshd[5310]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:10.082519 systemd-logind[1442]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:49:10.083200 systemd[1]: sshd@7-172.239.57.65:22-147.75.109.163:59156.service: Deactivated successfully. Nov 8 00:49:10.084711 containerd[1471]: time="2025-11-08T00:49:10.084644927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:49:10.089269 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:49:10.092218 systemd-logind[1442]: Removed session 8. Nov 8 00:49:10.226403 containerd[1471]: time="2025-11-08T00:49:10.226302366Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:49:10.228323 containerd[1471]: time="2025-11-08T00:49:10.227665195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:49:10.228323 containerd[1471]: time="2025-11-08T00:49:10.227786666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:49:10.228434 kubelet[2558]: E1108 00:49:10.228123 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:49:10.228434 kubelet[2558]: E1108 00:49:10.228284 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:49:10.228919 kubelet[2558]: E1108 00:49:10.228470 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-64fb4f5b7-cmrkg_calico-system(0f272084-e1d2-446c-8416-88d0ec3d2c2e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:49:10.228919 kubelet[2558]: E1108 00:49:10.228539 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:49:14.074079 kubelet[2558]: E1108 00:49:14.073936 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:49:15.148337 systemd[1]: Started sshd@8-172.239.57.65:22-147.75.109.163:60638.service - OpenSSH per-connection server daemon (147.75.109.163:60638). Nov 8 00:49:15.493439 sshd[5326]: Accepted publickey for core from 147.75.109.163 port 60638 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:15.494029 sshd[5326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:15.502685 systemd-logind[1442]: New session 9 of user core. Nov 8 00:49:15.509400 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:49:15.678574 systemd[1]: run-containerd-runc-k8s.io-9537688cdfdb8e530bf66ad2be85397606e1e784c48bcff7806953f9817700d0-runc.0g6GYo.mount: Deactivated successfully. Nov 8 00:49:15.851407 sshd[5326]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:15.856467 systemd[1]: sshd@8-172.239.57.65:22-147.75.109.163:60638.service: Deactivated successfully. Nov 8 00:49:15.859888 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:49:15.860976 systemd-logind[1442]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:49:15.862516 systemd-logind[1442]: Removed session 9. 
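Every ErrImagePull in this log reduces to the same root cause: the tag v3.30.4 does not exist under the ghcr.io/flatcar/calico/* repositories, so containerd's reference resolution gets an HTTP 404 before a single layer is transferred. The lookup can be reproduced against the OCI Distribution API directly. The Go sketch below is illustrative, not the cluster's tooling; it assumes ghcr.io's anonymous token endpoint and checks one of the images named above.

```go
// probe_tag.go - minimal sketch: does a tag resolve on ghcr.io?
// Assumes the standard Docker/OCI token flow as ghcr.io implements it.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/apiserver", "v3.30.4"

	// ghcr.io issues anonymous bearer tokens for public pulls.
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}
	resp.Body.Close()

	// HEAD the manifest, as a resolver would: 200 means the tag exists,
	// 404 reproduces the "failed to resolve reference ... not found" above.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Printf("ghcr.io/%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}
```

Substituting a tag that does exist would typically return 200 along with a Docker-Content-Digest header, which is what a successful resolution step consumes.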
Nov 8 00:49:16.086331 kubelet[2558]: E1108 00:49:16.086182 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:49:18.081971 containerd[1471]: time="2025-11-08T00:49:18.081707384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:49:18.219818 containerd[1471]: time="2025-11-08T00:49:18.219530260Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:49:18.221286 containerd[1471]: time="2025-11-08T00:49:18.220480558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:49:18.221286 containerd[1471]: time="2025-11-08T00:49:18.220662928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:49:18.221550 kubelet[2558]: E1108 00:49:18.221007 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:49:18.221550 kubelet[2558]: E1108 00:49:18.221415 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:49:18.223157 kubelet[2558]: E1108 00:49:18.223019 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5866ffd9dc-wrnwt_calico-apiserver(2e86dafc-d904-4554-bd5d-17e562479113): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:49:18.223157 kubelet[2558]: E1108 00:49:18.223080 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull 
and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:49:19.073084 kubelet[2558]: E1108 00:49:19.072785 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:49:19.073560 kubelet[2558]: E1108 00:49:19.073499 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:49:20.077914 kubelet[2558]: E1108 00:49:20.077607 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af" Nov 8 00:49:20.931462 systemd[1]: Started sshd@9-172.239.57.65:22-147.75.109.163:35426.service - OpenSSH per-connection server daemon (147.75.109.163:35426). 
Nov 8 00:49:21.073376 kubelet[2558]: E1108 00:49:21.073312 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:49:21.073893 kubelet[2558]: E1108 00:49:21.073735 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:49:21.292897 sshd[5369]: Accepted publickey for core from 147.75.109.163 port 35426 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:21.295345 sshd[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:21.306532 systemd-logind[1442]: New session 10 of user core. Nov 8 00:49:21.314282 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:49:21.712895 sshd[5369]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:21.717951 systemd-logind[1442]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:49:21.719623 systemd[1]: sshd@9-172.239.57.65:22-147.75.109.163:35426.service: Deactivated successfully. Nov 8 00:49:21.726041 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:49:21.728550 systemd-logind[1442]: Removed session 10. Nov 8 00:49:21.788041 systemd[1]: Started sshd@10-172.239.57.65:22-147.75.109.163:35434.service - OpenSSH per-connection server daemon (147.75.109.163:35434). Nov 8 00:49:22.145412 sshd[5385]: Accepted publickey for core from 147.75.109.163 port 35434 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:22.147706 sshd[5385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:22.153919 systemd-logind[1442]: New session 11 of user core. Nov 8 00:49:22.162463 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:49:22.635756 sshd[5385]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:22.644840 systemd-logind[1442]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:49:22.646709 systemd[1]: sshd@10-172.239.57.65:22-147.75.109.163:35434.service: Deactivated successfully. Nov 8 00:49:22.654146 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:49:22.660451 systemd-logind[1442]: Removed session 11. Nov 8 00:49:22.720491 systemd[1]: Started sshd@11-172.239.57.65:22-147.75.109.163:35436.service - OpenSSH per-connection server daemon (147.75.109.163:35436). Nov 8 00:49:23.074124 containerd[1471]: time="2025-11-08T00:49:23.073595197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:49:23.094794 sshd[5397]: Accepted publickey for core from 147.75.109.163 port 35436 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:23.096104 sshd[5397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:23.108432 systemd-logind[1442]: New session 12 of user core. Nov 8 00:49:23.118636 systemd[1]: Started session-12.scope - Session 12 of User core. 
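The recurring dns.go:154 "Nameserver limits exceeded" warnings are independent of the pull failures. The glibc resolver honours at most three nameserver entries, so when the node's resolv.conf lists more, kubelet warns and propagates only the first three to pods; the "applied nameserver line" in the message shows the survivors. A minimal sketch of that check, assuming the conventional /etc/resolv.conf path and the glibc MAXNS limit of 3:

```go
// resolvcheck.go - sketch of the condition behind "Nameserver limits exceeded".
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS: entries past this are ignored

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" line in order of appearance.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: %d configured, applying %s\n",
			len(servers), strings.Join(servers[:maxNameservers], " "))
	}
}
```

On this node the applied line is 172.232.0.9 172.232.0.19 172.232.0.20, so any fourth or later nameserver in resolv.conf is being dropped.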
Nov 8 00:49:23.210707 containerd[1471]: time="2025-11-08T00:49:23.210249540Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:49:23.212724 containerd[1471]: time="2025-11-08T00:49:23.211737269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:49:23.213286 containerd[1471]: time="2025-11-08T00:49:23.213164318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:49:23.213595 kubelet[2558]: E1108 00:49:23.213455 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:49:23.213595 kubelet[2558]: E1108 00:49:23.213549 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:49:23.216308 kubelet[2558]: E1108 00:49:23.214199 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5866ffd9dc-4k97x_calico-apiserver(91a10bd0-ee88-4b71-90ab-bbe7e6569a64): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:49:23.216308 kubelet[2558]: E1108 00:49:23.214382 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:49:23.532460 sshd[5397]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:23.543537 systemd-logind[1442]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:49:23.547395 systemd[1]: sshd@11-172.239.57.65:22-147.75.109.163:35436.service: Deactivated successfully. Nov 8 00:49:23.557353 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:49:23.558852 systemd-logind[1442]: Removed session 12. 
Nov 8 00:49:28.079438 containerd[1471]: time="2025-11-08T00:49:28.079311023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:49:28.408940 containerd[1471]: time="2025-11-08T00:49:28.408881766Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:49:28.410303 containerd[1471]: time="2025-11-08T00:49:28.410249635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:49:28.410634 containerd[1471]: time="2025-11-08T00:49:28.410262805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:49:28.411024 kubelet[2558]: E1108 00:49:28.410922 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:49:28.412529 kubelet[2558]: E1108 00:49:28.411053 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:49:28.412529 kubelet[2558]: E1108 00:49:28.411229 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7zs58_calico-system(1dbab252-cddb-4b1b-96da-a6419c1af573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:49:28.413316 containerd[1471]: time="2025-11-08T00:49:28.413285443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:49:28.540658 containerd[1471]: time="2025-11-08T00:49:28.540202239Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:49:28.541726 containerd[1471]: time="2025-11-08T00:49:28.541651617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:49:28.542013 containerd[1471]: time="2025-11-08T00:49:28.541792488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:49:28.542084 kubelet[2558]: E1108 00:49:28.541993 2558 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:49:28.542084 kubelet[2558]: E1108 00:49:28.542071 2558 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:49:28.542540 kubelet[2558]: E1108 00:49:28.542276 2558 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7zs58_calico-system(1dbab252-cddb-4b1b-96da-a6419c1af573): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:49:28.542845 kubelet[2558]: E1108 00:49:28.542594 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:49:28.599917 systemd[1]: Started sshd@12-172.239.57.65:22-147.75.109.163:35446.service - OpenSSH per-connection server daemon (147.75.109.163:35446). Nov 8 00:49:28.946129 sshd[5423]: Accepted publickey for core from 147.75.109.163 port 35446 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:28.949759 sshd[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:28.958744 systemd-logind[1442]: New session 13 of user core. Nov 8 00:49:28.966261 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:49:29.326466 sshd[5423]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:29.332264 systemd-logind[1442]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:49:29.333915 systemd[1]: sshd@12-172.239.57.65:22-147.75.109.163:35446.service: Deactivated successfully. Nov 8 00:49:29.337491 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:49:29.343842 systemd-logind[1442]: Removed session 13. Nov 8 00:49:29.396402 systemd[1]: Started sshd@13-172.239.57.65:22-147.75.109.163:35462.service - OpenSSH per-connection server daemon (147.75.109.163:35462). Nov 8 00:49:29.732439 sshd[5436]: Accepted publickey for core from 147.75.109.163 port 35462 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:29.734697 sshd[5436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:29.742532 systemd-logind[1442]: New session 14 of user core. Nov 8 00:49:29.751360 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 8 00:49:30.216489 sshd[5436]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:30.220690 systemd[1]: sshd@13-172.239.57.65:22-147.75.109.163:35462.service: Deactivated successfully. Nov 8 00:49:30.224107 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:49:30.225798 systemd-logind[1442]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:49:30.228937 systemd-logind[1442]: Removed session 14. Nov 8 00:49:30.283599 systemd[1]: Started sshd@14-172.239.57.65:22-147.75.109.163:49322.service - OpenSSH per-connection server daemon (147.75.109.163:49322). Nov 8 00:49:30.619919 sshd[5447]: Accepted publickey for core from 147.75.109.163 port 49322 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:30.621996 sshd[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:30.628549 systemd-logind[1442]: New session 15 of user core. Nov 8 00:49:30.632329 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:49:31.076624 kubelet[2558]: E1108 00:49:31.075886 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:49:31.823320 sshd[5447]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:31.829982 systemd[1]: sshd@14-172.239.57.65:22-147.75.109.163:49322.service: Deactivated successfully. Nov 8 00:49:31.836347 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:49:31.838717 systemd-logind[1442]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:49:31.841609 systemd-logind[1442]: Removed session 15. Nov 8 00:49:31.897341 systemd[1]: Started sshd@15-172.239.57.65:22-147.75.109.163:49338.service - OpenSSH per-connection server daemon (147.75.109.163:49338). Nov 8 00:49:32.277995 sshd[5464]: Accepted publickey for core from 147.75.109.163 port 49338 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:32.280292 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:32.287299 systemd-logind[1442]: New session 16 of user core. Nov 8 00:49:32.293286 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:49:32.851496 sshd[5464]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:32.856911 systemd-logind[1442]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:49:32.857817 systemd[1]: sshd@15-172.239.57.65:22-147.75.109.163:49338.service: Deactivated successfully. Nov 8 00:49:32.863290 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:49:32.870903 systemd-logind[1442]: Removed session 16. Nov 8 00:49:32.918218 systemd[1]: Started sshd@16-172.239.57.65:22-147.75.109.163:49342.service - OpenSSH per-connection server daemon (147.75.109.163:49342). 
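Note the cadence of the repeats: kubelet does not hammer the registry. After each ErrImagePull the container moves to ImagePullBackOff, and the wait before the next attempt roughly doubles up to a ceiling, which is why identical "Back-off pulling image" entries recur at widening intervals across this log. A sketch of that cadence, assuming kubelet's documented defaults of a 10-second initial delay capped at five minutes (the real kubelet tracks the deadline per image rather than sleeping inline):

```go
// backoff.go - sketch of the ImagePullBackOff retry cadence.
package main

import (
	"errors"
	"fmt"
	"time"
)

// pull stands in for the containerd PullImage RPC that keeps answering NotFound.
func pull(image string) error {
	return errors.New("rpc error: code = NotFound desc = " + image + ": not found")
}

func main() {
	const image = "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	delay, maxDelay := 10*time.Second, 5*time.Minute // assumed kubelet defaults
	for attempt := 1; attempt <= 6; attempt++ {
		err := pull(image)
		fmt.Printf("attempt %d: %v; backing off %s\n", attempt, err, delay)
		time.Sleep(delay)
		// Double the delay after every failure, up to the cap.
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```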
Nov 8 00:49:33.074341 kubelet[2558]: E1108 00:49:33.073875 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:49:33.258780 sshd[5479]: Accepted publickey for core from 147.75.109.163 port 49342 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:33.258372 sshd[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:33.268199 systemd-logind[1442]: New session 17 of user core. Nov 8 00:49:33.271316 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:49:33.604471 sshd[5479]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:33.610316 systemd-logind[1442]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:49:33.611053 systemd[1]: sshd@16-172.239.57.65:22-147.75.109.163:49342.service: Deactivated successfully. Nov 8 00:49:33.613817 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:49:33.615944 systemd-logind[1442]: Removed session 17. Nov 8 00:49:34.076659 kubelet[2558]: E1108 00:49:34.075425 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:49:35.073692 kubelet[2558]: E1108 00:49:35.073337 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e" Nov 8 00:49:35.076585 kubelet[2558]: E1108 00:49:35.076471 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af" Nov 8 00:49:38.683249 systemd[1]: Started sshd@17-172.239.57.65:22-147.75.109.163:49346.service - OpenSSH per-connection server daemon (147.75.109.163:49346). Nov 8 00:49:39.042763 sshd[5494]: Accepted publickey for core from 147.75.109.163 port 49346 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:39.043776 sshd[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:39.054559 systemd-logind[1442]: New session 18 of user core. Nov 8 00:49:39.063798 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:49:39.403901 sshd[5494]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:39.409078 systemd-logind[1442]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:49:39.410398 systemd[1]: sshd@17-172.239.57.65:22-147.75.109.163:49346.service: Deactivated successfully. Nov 8 00:49:39.413277 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:49:39.414556 systemd-logind[1442]: Removed session 18. Nov 8 00:49:42.082710 kubelet[2558]: E1108 00:49:42.082531 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573" Nov 8 00:49:43.073057 kubelet[2558]: E1108 00:49:43.072976 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Nov 8 00:49:44.480417 systemd[1]: Started sshd@18-172.239.57.65:22-147.75.109.163:38496.service - OpenSSH per-connection server daemon (147.75.109.163:38496). Nov 8 00:49:44.834249 sshd[5506]: Accepted publickey for core from 147.75.109.163 port 38496 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo Nov 8 00:49:44.838109 sshd[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:49:44.844581 systemd-logind[1442]: New session 19 of user core. Nov 8 00:49:44.851384 systemd[1]: Started session-19.scope - Session 19 of User core. 
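Pods with more than one failing container report a single "Error syncing pod" whose err field is a bracketed aggregate: csi-node-driver-7zs58 carries both calico-csi and csi-node-driver-registrar, and whisker-db65b7ddc-kmwf2 carries whisker and whisker-backend. A sketch of that aggregation follows; kubelet uses an aggregate error type for this, and the snippet only mimics the formatting seen above:

```go
// aggregate.go - sketch: many per-container failures, one pod sync error.
package main

import (
	"fmt"
	"strings"
)

// startContainer stands in for kubelet's per-container start attempt.
func startContainer(name string) error {
	return fmt.Errorf("failed to %q for %q with ErrImagePull", "StartContainer", name)
}

func syncPod(containers []string) error {
	var errs []string
	for _, c := range containers {
		if err := startContainer(c); err != nil {
			errs = append(errs, err.Error())
		}
	}
	switch len(errs) {
	case 0:
		return nil
	case 1:
		return fmt.Errorf("%s", errs[0]) // single failure: no brackets
	default:
		return fmt.Errorf("[%s]", strings.Join(errs, ", "))
	}
}

func main() {
	fmt.Println("Error syncing pod, skipping:",
		syncPod([]string{"calico-csi", "csi-node-driver-registrar"}))
}
```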
Nov 8 00:49:45.080209 kubelet[2558]: E1108 00:49:45.078200 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06" Nov 8 00:49:45.080209 kubelet[2558]: E1108 00:49:45.078590 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113" Nov 8 00:49:45.080209 kubelet[2558]: E1108 00:49:45.078659 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64" Nov 8 00:49:45.212042 sshd[5506]: pam_unix(sshd:session): session closed for user core Nov 8 00:49:45.220217 systemd-logind[1442]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:49:45.224159 systemd[1]: sshd@18-172.239.57.65:22-147.75.109.163:38496.service: Deactivated successfully. Nov 8 00:49:45.228230 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:49:45.231724 systemd-logind[1442]: Removed session 19. 
Nov 8 00:49:48.080183 kubelet[2558]: E1108 00:49:48.078214 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Nov 8 00:49:48.087497 kubelet[2558]: E1108 00:49:48.083487 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e"
Nov 8 00:49:50.077691 kubelet[2558]: E1108 00:49:50.077585 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af"
Nov 8 00:49:50.294043 systemd[1]: Started sshd@19-172.239.57.65:22-147.75.109.163:53762.service - OpenSSH per-connection server daemon (147.75.109.163:53762).
Nov 8 00:49:50.657251 sshd[5541]: Accepted publickey for core from 147.75.109.163 port 53762 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:49:50.661076 sshd[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:49:50.674494 systemd-logind[1442]: New session 20 of user core.
Nov 8 00:49:50.687974 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 00:49:50.994346 sshd[5541]: pam_unix(sshd:session): session closed for user core
Nov 8 00:49:51.000337 systemd[1]: sshd@19-172.239.57.65:22-147.75.109.163:53762.service: Deactivated successfully.
Nov 8 00:49:51.004901 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:49:51.005966 systemd-logind[1442]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:49:51.007374 systemd-logind[1442]: Removed session 20.
Nov 8 00:49:55.076991 kubelet[2558]: E1108 00:49:55.076871 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7zs58" podUID="1dbab252-cddb-4b1b-96da-a6419c1af573"
Nov 8 00:49:56.076715 systemd[1]: Started sshd@20-172.239.57.65:22-147.75.109.163:53770.service - OpenSSH per-connection server daemon (147.75.109.163:53770).
Nov 8 00:49:56.083169 kubelet[2558]: E1108 00:49:56.082356 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wb7lg" podUID="7494706d-b88b-42e3-9001-7633cd787a06"
Nov 8 00:49:56.474012 sshd[5556]: Accepted publickey for core from 147.75.109.163 port 53770 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:49:56.476310 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:49:56.506125 systemd-logind[1442]: New session 21 of user core.
Nov 8 00:49:56.514395 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:49:56.891060 sshd[5556]: pam_unix(sshd:session): session closed for user core
Nov 8 00:49:56.898130 systemd-logind[1442]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:49:56.903913 systemd[1]: sshd@20-172.239.57.65:22-147.75.109.163:53770.service: Deactivated successfully.
Nov 8 00:49:56.911094 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:49:56.913776 systemd-logind[1442]: Removed session 21.
Nov 8 00:49:58.086039 kubelet[2558]: E1108 00:49:58.085658 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-4k97x" podUID="91a10bd0-ee88-4b71-90ab-bbe7e6569a64"
Nov 8 00:49:59.075865 kubelet[2558]: E1108 00:49:59.075154 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5866ffd9dc-wrnwt" podUID="2e86dafc-d904-4554-bd5d-17e562479113"
Nov 8 00:50:00.073068 kubelet[2558]: E1108 00:50:00.072546 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Nov 8 00:50:01.963538 systemd[1]: Started sshd@21-172.239.57.65:22-147.75.109.163:46298.service - OpenSSH per-connection server daemon (147.75.109.163:46298).
Nov 8 00:50:02.076884 kubelet[2558]: E1108 00:50:02.076794 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-64fb4f5b7-cmrkg" podUID="0f272084-e1d2-446c-8416-88d0ec3d2c2e"
Nov 8 00:50:02.311794 sshd[5569]: Accepted publickey for core from 147.75.109.163 port 46298 ssh2: RSA SHA256:sQZYQ0sNro4IzoVszOu0pe2WFQssY71k7V++Po1LELo
Nov 8 00:50:02.313981 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:50:02.321148 systemd-logind[1442]: New session 22 of user core.
Nov 8 00:50:02.327298 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 00:50:02.649905 sshd[5569]: pam_unix(sshd:session): session closed for user core
Nov 8 00:50:02.655306 systemd-logind[1442]: Session 22 logged out. Waiting for processes to exit.
Nov 8 00:50:02.656554 systemd[1]: sshd@21-172.239.57.65:22-147.75.109.163:46298.service: Deactivated successfully.
Nov 8 00:50:02.659488 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 00:50:02.662774 systemd-logind[1442]: Removed session 22.
Nov 8 00:50:03.080247 kubelet[2558]: E1108 00:50:03.078881 2558 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-db65b7ddc-kmwf2" podUID="01ba0641-89d0-49ee-914a-4dc2009268af"
Nov 8 00:50:04.073324 kubelet[2558]: E1108 00:50:04.073261 2558 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
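Note: the recurring dns.go:154 warnings are kubelet reporting that the node's resolver configuration lists more nameservers than the limit of three that it applies (the classic glibc MAXNS limit), so the extras are dropped; only 172.232.0.9, 172.232.0.19, and 172.232.0.20 survive. A sketch of the likely shape of the node's /etc/resolv.conf (the fourth entry is hypothetical, added only to illustrate what triggers the warning):

    nameserver 172.232.0.9
    nameserver 172.232.0.19
    nameserver 172.232.0.20
    nameserver 172.232.0.21    # hypothetical extra entry; omitted once the 3-server limit is hit

Trimming the node's resolver configuration to at most three nameserver entries silences the warning without changing the applied configuration.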