Nov 5 04:48:50.289207 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 03:01:50 -00 2025
Nov 5 04:48:50.289232 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9a076e14dca937d9663502c090e1ff4931f585a3752c3aa4c87feb67d6e5a465
Nov 5 04:48:50.289245 kernel: BIOS-provided physical RAM map:
Nov 5 04:48:50.289252 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 5 04:48:50.289258 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 5 04:48:50.289265 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 5 04:48:50.289273 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 5 04:48:50.289281 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 5 04:48:50.289290 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 5 04:48:50.289299 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 5 04:48:50.289318 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 04:48:50.289327 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 5 04:48:50.289336 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 04:48:50.289345 kernel: NX (Execute Disable) protection: active
Nov 5 04:48:50.289354 kernel: APIC: Static calls initialized
Nov 5 04:48:50.289364 kernel: SMBIOS 2.8 present.
Nov 5 04:48:50.289374 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 5 04:48:50.289382 kernel: DMI: Memory slots populated: 1/1
Nov 5 04:48:50.289389 kernel: Hypervisor detected: KVM
Nov 5 04:48:50.289397 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 5 04:48:50.289404 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 04:48:50.289412 kernel: kvm-clock: using sched offset of 4130470338 cycles
Nov 5 04:48:50.289420 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 04:48:50.289429 kernel: tsc: Detected 2794.748 MHz processor
Nov 5 04:48:50.289439 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 04:48:50.289447 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 04:48:50.289455 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 5 04:48:50.289464 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 5 04:48:50.289472 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 04:48:50.289480 kernel: Using GB pages for direct mapping
Nov 5 04:48:50.289488 kernel: ACPI: Early table checksum verification disabled
Nov 5 04:48:50.289498 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 5 04:48:50.289506 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:48:50.289514 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:48:50.289522 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:48:50.289529 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 5 04:48:50.289537 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:48:50.289545 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:48:50.289555 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:48:50.289563 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 04:48:50.289574 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 5 04:48:50.289582 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 5 04:48:50.289590 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 5 04:48:50.289600 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 5 04:48:50.289608 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 5 04:48:50.289616 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 5 04:48:50.289624 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 5 04:48:50.289632 kernel: No NUMA configuration found
Nov 5 04:48:50.289640 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 5 04:48:50.289648 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 5 04:48:50.289658 kernel: Zone ranges:
Nov 5 04:48:50.289667 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 04:48:50.289674 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 5 04:48:50.289682 kernel: Normal empty
Nov 5 04:48:50.289690 kernel: Device empty
Nov 5 04:48:50.289698 kernel: Movable zone start for each node
Nov 5 04:48:50.289706 kernel: Early memory node ranges
Nov 5 04:48:50.289714 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 5 04:48:50.289724 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 5 04:48:50.289732 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 5 04:48:50.289740 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 04:48:50.289748 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 5 04:48:50.289756 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 5 04:48:50.289767 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 04:48:50.289776 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 04:48:50.289800 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 04:48:50.289809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 04:48:50.289820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 04:48:50.289828 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 04:48:50.289836 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 04:48:50.289844 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 04:48:50.289852 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 04:48:50.289863 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 04:48:50.289871 kernel: TSC deadline timer available
Nov 5 04:48:50.289879 kernel: CPU topo: Max. logical packages: 1
Nov 5 04:48:50.289887 kernel: CPU topo: Max. logical dies: 1
Nov 5 04:48:50.289895 kernel: CPU topo: Max. dies per package: 1
Nov 5 04:48:50.289903 kernel: CPU topo: Max. threads per core: 1
Nov 5 04:48:50.289911 kernel: CPU topo: Num. cores per package: 4
Nov 5 04:48:50.289919 kernel: CPU topo: Num. threads per package: 4
Nov 5 04:48:50.289929 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 5 04:48:50.289937 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 04:48:50.289945 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 04:48:50.289953 kernel: kvm-guest: setup PV sched yield
Nov 5 04:48:50.289961 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 5 04:48:50.289970 kernel: Booting paravirtualized kernel on KVM
Nov 5 04:48:50.289978 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 04:48:50.289988 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 5 04:48:50.289997 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 5 04:48:50.290005 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 5 04:48:50.290012 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 5 04:48:50.290020 kernel: kvm-guest: PV spinlocks enabled
Nov 5 04:48:50.290028 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 04:48:50.290038 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9a076e14dca937d9663502c090e1ff4931f585a3752c3aa4c87feb67d6e5a465
Nov 5 04:48:50.290048 kernel: random: crng init done
Nov 5 04:48:50.290056 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 04:48:50.290065 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 04:48:50.290073 kernel: Fallback order for Node 0: 0
Nov 5 04:48:50.290081 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 5 04:48:50.290089 kernel: Policy zone: DMA32
Nov 5 04:48:50.290097 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 04:48:50.290107 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 5 04:48:50.290115 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 04:48:50.290123 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 04:48:50.290131 kernel: Dynamic Preempt: voluntary
Nov 5 04:48:50.290139 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 04:48:50.290152 kernel: rcu: RCU event tracing is enabled.
Nov 5 04:48:50.290160 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 5 04:48:50.290170 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 04:48:50.290182 kernel: Rude variant of Tasks RCU enabled.
Nov 5 04:48:50.290190 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 04:48:50.290198 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 04:48:50.290206 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 5 04:48:50.290214 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 04:48:50.290222 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 04:48:50.290230 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 04:48:50.290241 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 5 04:48:50.290249 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 04:48:50.290264 kernel: Console: colour VGA+ 80x25
Nov 5 04:48:50.290275 kernel: printk: legacy console [ttyS0] enabled
Nov 5 04:48:50.290283 kernel: ACPI: Core revision 20240827
Nov 5 04:48:50.290292 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 04:48:50.290300 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 04:48:50.290316 kernel: x2apic enabled
Nov 5 04:48:50.290324 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 04:48:50.290338 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 04:48:50.290347 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 04:48:50.290355 kernel: kvm-guest: setup PV IPIs
Nov 5 04:48:50.290376 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 04:48:50.290388 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 04:48:50.290397 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 5 04:48:50.290405 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 04:48:50.290413 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 04:48:50.290422 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 04:48:50.290430 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 04:48:50.290439 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 04:48:50.290450 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 04:48:50.290458 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 5 04:48:50.290467 kernel: active return thunk: retbleed_return_thunk
Nov 5 04:48:50.290475 kernel: RETBleed: Mitigation: untrained return thunk
Nov 5 04:48:50.290483 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 04:48:50.290492 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 04:48:50.290500 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 04:48:50.290511 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 5 04:48:50.290520 kernel: active return thunk: srso_return_thunk
Nov 5 04:48:50.290528 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 04:48:50.290537 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 04:48:50.290545 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 04:48:50.290554 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 04:48:50.290562 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 04:48:50.290572 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 5 04:48:50.290581 kernel: Freeing SMP alternatives memory: 32K
Nov 5 04:48:50.290589 kernel: pid_max: default: 32768 minimum: 301
Nov 5 04:48:50.290597 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 04:48:50.290606 kernel: landlock: Up and running.
Nov 5 04:48:50.290614 kernel: SELinux: Initializing.
Nov 5 04:48:50.290625 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 04:48:50.290636 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 04:48:50.290645 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 5 04:48:50.290653 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 04:48:50.290661 kernel: ... version: 0
Nov 5 04:48:50.290670 kernel: ... bit width: 48
Nov 5 04:48:50.290678 kernel: ... generic registers: 6
Nov 5 04:48:50.290686 kernel: ... value mask: 0000ffffffffffff
Nov 5 04:48:50.290697 kernel: ... max period: 00007fffffffffff
Nov 5 04:48:50.290705 kernel: ... fixed-purpose events: 0
Nov 5 04:48:50.290713 kernel: ... event mask: 000000000000003f
Nov 5 04:48:50.290721 kernel: signal: max sigframe size: 1776
Nov 5 04:48:50.290730 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 04:48:50.290738 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 04:48:50.290747 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 04:48:50.290757 kernel: smp: Bringing up secondary CPUs ...
Nov 5 04:48:50.290765 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 04:48:50.290774 kernel: .... node #0, CPUs: #1 #2 #3
Nov 5 04:48:50.290782 kernel: smp: Brought up 1 node, 4 CPUs
Nov 5 04:48:50.290804 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 5 04:48:50.290812 kernel: Memory: 2447344K/2571752K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15348K init, 2696K bss, 118472K reserved, 0K cma-reserved)
Nov 5 04:48:50.290821 kernel: devtmpfs: initialized
Nov 5 04:48:50.290832 kernel: x86/mm: Memory block size: 128MB
Nov 5 04:48:50.290840 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 04:48:50.290849 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 5 04:48:50.290857 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 04:48:50.290868 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 04:48:50.290876 kernel: audit: initializing netlink subsys (disabled)
Nov 5 04:48:50.290885 kernel: audit: type=2000 audit(1762318127.113:1): state=initialized audit_enabled=0 res=1
Nov 5 04:48:50.290895 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 04:48:50.290904 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 04:48:50.290912 kernel: cpuidle: using governor menu
Nov 5 04:48:50.290920 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 04:48:50.290928 kernel: dca service started, version 1.12.1
Nov 5 04:48:50.290937 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 5 04:48:50.290945 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 5 04:48:50.290956 kernel: PCI: Using configuration type 1 for base access
Nov 5 04:48:50.290964 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 04:48:50.290973 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 04:48:50.290981 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 04:48:50.290989 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 04:48:50.290998 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 04:48:50.291006 kernel: ACPI: Added _OSI(Module Device)
Nov 5 04:48:50.291016 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 04:48:50.291025 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 04:48:50.291033 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 04:48:50.291041 kernel: ACPI: Interpreter enabled
Nov 5 04:48:50.291050 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 5 04:48:50.291058 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 04:48:50.291066 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 04:48:50.291075 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 04:48:50.291085 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 04:48:50.291093 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 04:48:50.291343 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 04:48:50.291528 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 04:48:50.291706 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 5 04:48:50.291721 kernel: PCI host bridge to bus 0000:00
Nov 5 04:48:50.291923 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 04:48:50.292086 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 04:48:50.292247 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 04:48:50.292416 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 5 04:48:50.292583 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 5 04:48:50.292751 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 5 04:48:50.292940 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 04:48:50.293136 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 04:48:50.293333 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 04:48:50.293509 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 5 04:48:50.293719 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 5 04:48:50.293910 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 5 04:48:50.294083 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 04:48:50.294266 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 04:48:50.294451 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 5 04:48:50.294625 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 5 04:48:50.294824 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 5 04:48:50.295011 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 5 04:48:50.295188 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 5 04:48:50.295373 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 5 04:48:50.295550 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 5 04:48:50.295738 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 04:48:50.295958 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 5 04:48:50.296141 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 5 04:48:50.296327 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 5 04:48:50.296504 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 5 04:48:50.296691 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 04:48:50.296897 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 04:48:50.297083 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 04:48:50.297258 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 5 04:48:50.297445 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 5 04:48:50.297629 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 04:48:50.297821 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 5 04:48:50.297838 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 04:48:50.297847 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 04:48:50.297856 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 04:48:50.297866 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 04:48:50.297875 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 04:48:50.297884 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 04:48:50.297893 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 04:48:50.297904 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 04:48:50.297913 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 04:48:50.297921 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 04:48:50.297930 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 04:48:50.297939 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 04:48:50.297948 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 04:48:50.297957 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 04:48:50.297968 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 04:48:50.297977 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 04:48:50.297985 kernel: iommu: Default domain type: Translated
Nov 5 04:48:50.297994 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 04:48:50.298003 kernel: PCI: Using ACPI for IRQ routing
Nov 5 04:48:50.298012 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 04:48:50.298021 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 5 04:48:50.298032 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 5 04:48:50.298210 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 04:48:50.298395 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 04:48:50.298570 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 04:48:50.298581 kernel: vgaarb: loaded
Nov 5 04:48:50.298590 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 04:48:50.298598 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 04:48:50.298610 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 04:48:50.298619 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 04:48:50.298627 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 04:48:50.298635 kernel: pnp: PnP ACPI init
Nov 5 04:48:50.298843 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 5 04:48:50.298856 kernel: pnp: PnP ACPI: found 6 devices
Nov 5 04:48:50.298868 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 04:48:50.298877 kernel: NET: Registered PF_INET protocol family
Nov 5 04:48:50.298886 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 04:48:50.298894 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 04:48:50.298903 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 04:48:50.298911 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 04:48:50.298920 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 04:48:50.298931 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 04:48:50.298939 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 04:48:50.298947 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 04:48:50.298956 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 04:48:50.298964 kernel: NET: Registered PF_XDP protocol family
Nov 5 04:48:50.299128 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 04:48:50.299289 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 04:48:50.299482 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 04:48:50.299655 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 5 04:48:50.299834 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 5 04:48:50.300006 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 5 04:48:50.300017 kernel: PCI: CLS 0 bytes, default 64
Nov 5 04:48:50.300026 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 04:48:50.300035 kernel: Initialise system trusted keyrings
Nov 5 04:48:50.300053 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 04:48:50.300062 kernel: Key type asymmetric registered
Nov 5 04:48:50.300070 kernel: Asymmetric key parser 'x509' registered
Nov 5 04:48:50.300079 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 04:48:50.300087 kernel: io scheduler mq-deadline registered
Nov 5 04:48:50.300096 kernel: io scheduler kyber registered
Nov 5 04:48:50.300104 kernel: io scheduler bfq registered
Nov 5 04:48:50.300116 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 04:48:50.300125 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 04:48:50.300134 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 04:48:50.300142 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 5 04:48:50.300151 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 04:48:50.300159 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 04:48:50.300168 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 04:48:50.300178 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 04:48:50.300187 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 04:48:50.300377 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 5 04:48:50.300390 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 04:48:50.300557 kernel: rtc_cmos 00:04: registered as rtc0
Nov 5 04:48:50.300723 kernel: rtc_cmos 00:04: setting system clock to 2025-11-05T04:48:48 UTC (1762318128)
Nov 5 04:48:50.300912 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 5 04:48:50.300924 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 04:48:50.300932 kernel: NET: Registered PF_INET6 protocol family
Nov 5 04:48:50.300941 kernel: Segment Routing with IPv6
Nov 5 04:48:50.300949 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 04:48:50.300958 kernel: NET: Registered PF_PACKET protocol family
Nov 5 04:48:50.300966 kernel: Key type dns_resolver registered
Nov 5 04:48:50.300974 kernel: IPI shorthand broadcast: enabled
Nov 5 04:48:50.300987 kernel: sched_clock: Marking stable (1889002798, 200205048)->(2140726256, -51518410)
Nov 5 04:48:50.300996 kernel: registered taskstats version 1
Nov 5 04:48:50.301004 kernel: Loading compiled-in X.509 certificates
Nov 5 04:48:50.301013 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: cfd469c5acf75e2b7be33dd554bbf88cbfe73c93'
Nov 5 04:48:50.301021 kernel: Demotion targets for Node 0: null
Nov 5 04:48:50.301030 kernel: Key type .fscrypt registered
Nov 5 04:48:50.301038 kernel: Key type fscrypt-provisioning registered
Nov 5 04:48:50.301048 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 04:48:50.301057 kernel: ima: Allocated hash algorithm: sha1
Nov 5 04:48:50.301065 kernel: ima: No architecture policies found
Nov 5 04:48:50.301073 kernel: clk: Disabling unused clocks
Nov 5 04:48:50.301082 kernel: Freeing unused kernel image (initmem) memory: 15348K
Nov 5 04:48:50.301090 kernel: Write protecting the kernel read-only data: 45056k
Nov 5 04:48:50.301099 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K
Nov 5 04:48:50.301109 kernel: Run /init as init process
Nov 5 04:48:50.301120 kernel: with arguments:
Nov 5 04:48:50.301130 kernel: /init
Nov 5 04:48:50.301139 kernel: with environment:
Nov 5 04:48:50.301147 kernel: HOME=/
Nov 5 04:48:50.301155 kernel: TERM=linux
Nov 5 04:48:50.301163 kernel: SCSI subsystem initialized
Nov 5 04:48:50.301174 kernel: libata version 3.00 loaded.
Nov 5 04:48:50.301364 kernel: ahci 0000:00:1f.2: version 3.0
Nov 5 04:48:50.301397 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 5 04:48:50.301571 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 5 04:48:50.301753 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 5 04:48:50.302007 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 5 04:48:50.302212 kernel: scsi host0: ahci
Nov 5 04:48:50.302415 kernel: scsi host1: ahci
Nov 5 04:48:50.302605 kernel: scsi host2: ahci
Nov 5 04:48:50.302809 kernel: scsi host3: ahci
Nov 5 04:48:50.303004 kernel: scsi host4: ahci
Nov 5 04:48:50.303196 kernel: scsi host5: ahci
Nov 5 04:48:50.303208 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 26 lpm-pol 1
Nov 5 04:48:50.303217 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 26 lpm-pol 1
Nov 5 04:48:50.303226 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 26 lpm-pol 1
Nov 5 04:48:50.303235 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 26 lpm-pol 1
Nov 5 04:48:50.303244 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 26 lpm-pol 1
Nov 5 04:48:50.303252 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 26 lpm-pol 1
Nov 5 04:48:50.303264 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 5 04:48:50.303273 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 5 04:48:50.303282 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 5 04:48:50.303290 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 5 04:48:50.303299 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 5 04:48:50.303327 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 5 04:48:50.303345 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 04:48:50.303359 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 5 04:48:50.303367 kernel: ata3.00: applying bridge limits
Nov 5 04:48:50.303383 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 04:48:50.303401 kernel: ata3.00: configured for UDMA/100
Nov 5 04:48:50.303623 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 5 04:48:50.303832 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 5 04:48:50.304014 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 5 04:48:50.304026 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 04:48:50.304035 kernel: GPT:16515071 != 27000831
Nov 5 04:48:50.304044 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 04:48:50.304053 kernel: GPT:16515071 != 27000831
Nov 5 04:48:50.304061 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 04:48:50.304070 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 04:48:50.304267 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 5 04:48:50.304279 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 5 04:48:50.304478 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 5 04:48:50.304491 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 04:48:50.304500 kernel: device-mapper: uevent: version 1.0.3
Nov 5 04:48:50.304509 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 04:48:50.304526 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 04:48:50.304537 kernel: raid6: avx2x4 gen() 28352 MB/s
Nov 5 04:48:50.304545 kernel: raid6: avx2x2 gen() 28444 MB/s
Nov 5 04:48:50.304554 kernel: raid6: avx2x1 gen() 22180 MB/s
Nov 5 04:48:50.304563 kernel: raid6: using algorithm avx2x2 gen() 28444 MB/s
Nov 5 04:48:50.304577 kernel: raid6: .... xor() 16497 MB/s, rmw enabled
Nov 5 04:48:50.304585 kernel: raid6: using avx2x2 recovery algorithm
Nov 5 04:48:50.304594 kernel: xor: automatically using best checksumming function avx
Nov 5 04:48:50.304603 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 04:48:50.304612 kernel: BTRFS: device fsid 8119ddf0-7fda-4d84-ad78-3566733896c1 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182)
Nov 5 04:48:50.304621 kernel: BTRFS info (device dm-0): first mount of filesystem 8119ddf0-7fda-4d84-ad78-3566733896c1
Nov 5 04:48:50.304630 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:48:50.304644 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 04:48:50.304653 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 04:48:50.304661 kernel: loop: module loaded
Nov 5 04:48:50.304670 kernel: loop0: detected capacity change from 0 to 100136
Nov 5 04:48:50.304679 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 04:48:50.304689 systemd[1]: Successfully made /usr/ read-only.
Nov 5 04:48:50.304701 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 04:48:50.304713 systemd[1]: Detected virtualization kvm.
Nov 5 04:48:50.304722 systemd[1]: Detected architecture x86-64.
Nov 5 04:48:50.304731 systemd[1]: Running in initrd.
Nov 5 04:48:50.304740 systemd[1]: No hostname configured, using default hostname.
Nov 5 04:48:50.304750 systemd[1]: Hostname set to .
Nov 5 04:48:50.304759 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 04:48:50.304770 systemd[1]: Queued start job for default target initrd.target.
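The raid6 gen() lines above show the kernel benchmarking each available SIMD implementation and keeping the fastest, which is why avx2x2 (28444 MB/s) wins over avx2x4 (28352 MB/s) despite the wider name. The selection amounts to a max over measured throughput; a rough sketch under that assumption (function and variable names are illustrative, throughputs taken from this log):

```python
def pick_raid6_algorithm(results: dict) -> str:
    """Return the candidate with the highest measured MB/s."""
    return max(results, key=results.get)

# gen() throughputs reported in this boot log
benchmarks = {"avx2x4": 28352, "avx2x2": 28444, "avx2x1": 22180}
print(pick_raid6_algorithm(benchmarks))  # -> avx2x2
```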
Nov 5 04:48:50.304779 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 04:48:50.304805 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 04:48:50.304814 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 04:48:50.304828 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 04:48:50.304837 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 04:48:50.304854 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 04:48:50.304865 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 04:48:50.304876 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 04:48:50.304886 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 04:48:50.304895 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 04:48:50.304905 systemd[1]: Reached target paths.target - Path Units.
Nov 5 04:48:50.304916 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 04:48:50.304926 systemd[1]: Reached target swap.target - Swaps.
Nov 5 04:48:50.304935 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 04:48:50.304944 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 04:48:50.304953 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 04:48:50.304963 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 04:48:50.304972 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 04:48:50.304984 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 04:48:50.304993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 04:48:50.305002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 04:48:50.305011 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 04:48:50.305021 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 04:48:50.305030 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 04:48:50.305049 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 04:48:50.305059 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 04:48:50.305069 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 04:48:50.305078 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 04:48:50.305087 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 04:48:50.305096 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 04:48:50.305106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:48:50.305123 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 04:48:50.305133 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 04:48:50.305142 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 04:48:50.305179 systemd-journald[315]: Collecting audit messages is disabled.
Nov 5 04:48:50.305208 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 04:48:50.305218 systemd-journald[315]: Journal started
Nov 5 04:48:50.305244 systemd-journald[315]: Runtime Journal (/run/log/journal/1e31b4d156c14a21a27b13eb2d164faf) is 6M, max 48.2M, 42.2M free.
Nov 5 04:48:50.320102 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 04:48:50.336807 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 04:48:50.340209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 04:48:50.345363 kernel: Bridge firewalling registered
Nov 5 04:48:50.344848 systemd-modules-load[318]: Inserted module 'br_netfilter'
Nov 5 04:48:50.345271 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 04:48:50.429061 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 04:48:50.433643 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:48:50.438803 systemd-tmpfiles[332]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 04:48:50.440030 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 04:48:50.444921 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 04:48:50.466423 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 04:48:50.483542 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 04:48:50.495746 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 04:48:50.499772 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 04:48:50.502313 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 04:48:50.514922 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 04:48:50.517194 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 04:48:50.541056 dracut-cmdline[358]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=9a076e14dca937d9663502c090e1ff4931f585a3752c3aa4c87feb67d6e5a465
Nov 5 04:48:50.577260 systemd-resolved[354]: Positive Trust Anchors:
Nov 5 04:48:50.577276 systemd-resolved[354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 04:48:50.577281 systemd-resolved[354]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 04:48:50.577329 systemd-resolved[354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 04:48:50.598872 systemd-resolved[354]: Defaulting to hostname 'linux'.
Nov 5 04:48:50.600313 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 04:48:50.611913 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 04:48:50.746849 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 04:48:50.761831 kernel: iscsi: registered transport (tcp)
Nov 5 04:48:50.786854 kernel: iscsi: registered transport (qla4xxx)
Nov 5 04:48:50.786954 kernel: QLogic iSCSI HBA Driver
Nov 5 04:48:50.819327 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 04:48:50.858645 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 04:48:50.864510 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 04:48:50.939988 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 04:48:50.942283 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 04:48:50.944486 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 04:48:50.979812 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 04:48:50.985374 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 04:48:51.024984 systemd-udevd[600]: Using default interface naming scheme 'v257'.
Nov 5 04:48:51.039726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 04:48:51.046948 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 04:48:51.062449 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 04:48:51.067628 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 04:48:51.102638 dracut-pre-trigger[687]: rd.md=0: removing MD RAID activation
Nov 5 04:48:51.131870 systemd-networkd[700]: lo: Link UP
Nov 5 04:48:51.131882 systemd-networkd[700]: lo: Gained carrier
Nov 5 04:48:51.132506 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 04:48:51.133619 systemd[1]: Reached target network.target - Network.
Nov 5 04:48:51.140033 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 04:48:51.143139 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 04:48:51.238659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 04:48:51.240916 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 04:48:51.293140 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 04:48:51.318629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 04:48:51.331760 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 04:48:51.343799 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 04:48:51.353924 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 04:48:51.358563 kernel: cryptd: max_cpu_qlen set to 1000
Nov 5 04:48:51.362840 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 5 04:48:51.370078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 04:48:51.370352 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:48:51.371124 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:48:51.371319 systemd-networkd[700]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 04:48:51.385739 kernel: AES CTR mode by8 optimization enabled
Nov 5 04:48:51.371325 systemd-networkd[700]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 04:48:51.372661 systemd-networkd[700]: eth0: Link UP
Nov 5 04:48:51.372996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:48:51.373400 systemd-networkd[700]: eth0: Gained carrier
Nov 5 04:48:51.373410 systemd-networkd[700]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 04:48:51.402543 disk-uuid[777]: Primary Header is updated.
Nov 5 04:48:51.402543 disk-uuid[777]: Secondary Entries is updated.
Nov 5 04:48:51.402543 disk-uuid[777]: Secondary Header is updated.
Nov 5 04:48:51.404082 systemd-networkd[700]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 04:48:51.491888 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 04:48:51.502375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:48:51.505485 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 04:48:51.512586 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 04:48:51.514737 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 04:48:51.520015 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 04:48:51.558053 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 04:48:52.470179 disk-uuid[827]: Warning: The kernel is still using the old partition table.
Nov 5 04:48:52.470179 disk-uuid[827]: The new table will be used at the next reboot or after you
Nov 5 04:48:52.470179 disk-uuid[827]: run partprobe(8) or kpartx(8)
Nov 5 04:48:52.470179 disk-uuid[827]: The operation has completed successfully.
Nov 5 04:48:52.495016 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 04:48:52.495211 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 04:48:52.501347 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 04:48:52.542832 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869)
Nov 5 04:48:52.542988 kernel: BTRFS info (device vda6): first mount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:48:52.566470 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:48:52.570569 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 04:48:52.570594 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 04:48:52.578813 kernel: BTRFS info (device vda6): last unmount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:48:52.580303 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 04:48:52.585185 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 04:48:52.857507 ignition[888]: Ignition 2.22.0
Nov 5 04:48:52.857568 ignition[888]: Stage: fetch-offline
Nov 5 04:48:52.857634 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Nov 5 04:48:52.857648 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:48:52.858774 ignition[888]: parsed url from cmdline: ""
Nov 5 04:48:52.858780 ignition[888]: no config URL provided
Nov 5 04:48:52.858815 ignition[888]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 04:48:52.858831 ignition[888]: no config at "/usr/lib/ignition/user.ign"
Nov 5 04:48:52.858902 ignition[888]: op(1): [started] loading QEMU firmware config module
Nov 5 04:48:52.858908 ignition[888]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 5 04:48:52.870296 ignition[888]: op(1): [finished] loading QEMU firmware config module
Nov 5 04:48:52.951757 ignition[888]: parsing config with SHA512: c846550b173397e18b01cdb04b1736c7dcff1fe1e51ff18c015d575510f834fd417436383597c9e641588c2f79816602290142717bfc1a5c85f28bab6dc8aec8
Nov 5 04:48:52.956134 unknown[888]: fetched base config from "system"
Nov 5 04:48:52.956146 unknown[888]: fetched user config from "qemu"
Nov 5 04:48:52.956474 ignition[888]: fetch-offline: fetch-offline passed
Nov 5 04:48:52.959697 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 04:48:52.956538 ignition[888]: Ignition finished successfully
Nov 5 04:48:52.962535 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 5 04:48:52.963892 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 04:48:53.057089 systemd-networkd[700]: eth0: Gained IPv6LL
Nov 5 04:48:53.071670 ignition[898]: Ignition 2.22.0
Nov 5 04:48:53.071688 ignition[898]: Stage: kargs
Nov 5 04:48:53.071908 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Nov 5 04:48:53.071923 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:48:53.073457 ignition[898]: kargs: kargs passed
Nov 5 04:48:53.077985 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 04:48:53.073533 ignition[898]: Ignition finished successfully
Nov 5 04:48:53.081169 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 04:48:53.120693 ignition[906]: Ignition 2.22.0
Nov 5 04:48:53.120707 ignition[906]: Stage: disks
Nov 5 04:48:53.120884 ignition[906]: no configs at "/usr/lib/ignition/base.d"
Nov 5 04:48:53.120896 ignition[906]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:48:53.124703 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 04:48:53.121805 ignition[906]: disks: disks passed
Nov 5 04:48:53.128141 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 04:48:53.121858 ignition[906]: Ignition finished successfully
Nov 5 04:48:53.131382 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 04:48:53.134469 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 04:48:53.137864 systemd[1]: Reached target sysinit.target - System Initialization.
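The "parsing config with SHA512" entry a few records above shows Ignition logging a digest of the raw config bytes it merged. That digest is an ordinary SHA-512 and can be reproduced with any standard library; a minimal illustration with Python's hashlib (the sample config below is invented for the example, not the config from this boot):

```python
import hashlib

def config_digest(raw: bytes) -> str:
    """Hex SHA-512 digest, as in Ignition's 'parsing config with SHA512: ...' log line."""
    return hashlib.sha512(raw).hexdigest()

# Hypothetical Ignition config fragment, used only to show the call
sample = b'{"ignition": {"version": "3.4.0"}}'
print(config_digest(sample))
```

Comparing this digest against the logged one is a quick way to confirm which config bytes a given boot actually consumed.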
Nov 5 04:48:53.140709 systemd[1]: Reached target basic.target - Basic System.
Nov 5 04:48:53.145529 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 04:48:53.205064 systemd-fsck[916]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 5 04:48:53.302261 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 04:48:53.304426 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 04:48:53.414820 kernel: EXT4-fs (vda9): mounted filesystem d6ba737d-b2ad-4de6-9309-ffb105e40987 r/w with ordered data mode. Quota mode: none.
Nov 5 04:48:53.414887 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 04:48:53.416216 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 04:48:53.419733 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 04:48:53.422538 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 04:48:53.425420 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 04:48:53.425458 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 04:48:53.442751 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (924)
Nov 5 04:48:53.442802 kernel: BTRFS info (device vda6): first mount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:48:53.442815 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:48:53.425483 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 04:48:53.447138 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 04:48:53.448727 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 04:48:53.432603 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 04:48:53.443690 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 04:48:53.449982 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 04:48:53.525039 initrd-setup-root[948]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 04:48:53.529832 initrd-setup-root[955]: cut: /sysroot/etc/group: No such file or directory
Nov 5 04:48:53.536542 initrd-setup-root[962]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 04:48:53.542457 initrd-setup-root[969]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 04:48:53.648507 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 04:48:53.651912 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 04:48:53.654557 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 04:48:53.679362 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 04:48:53.682025 kernel: BTRFS info (device vda6): last unmount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:48:53.697981 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 04:48:53.727229 ignition[1037]: INFO : Ignition 2.22.0
Nov 5 04:48:53.727229 ignition[1037]: INFO : Stage: mount
Nov 5 04:48:53.730080 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 04:48:53.730080 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:48:53.730080 ignition[1037]: INFO : mount: mount passed
Nov 5 04:48:53.730080 ignition[1037]: INFO : Ignition finished successfully
Nov 5 04:48:53.731618 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 04:48:53.735187 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 04:48:53.767610 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 04:48:53.796140 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1050)
Nov 5 04:48:53.796173 kernel: BTRFS info (device vda6): first mount of filesystem e7137982-ac37-41c2-8fd6-d0cf0728ebd4
Nov 5 04:48:53.796223 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 04:48:53.801247 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 04:48:53.801276 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 04:48:53.803179 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 04:48:53.879041 ignition[1067]: INFO : Ignition 2.22.0
Nov 5 04:48:53.879041 ignition[1067]: INFO : Stage: files
Nov 5 04:48:53.881748 ignition[1067]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 04:48:53.881748 ignition[1067]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 04:48:53.885729 ignition[1067]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 04:48:53.888073 ignition[1067]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 04:48:53.888073 ignition[1067]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 04:48:53.896234 ignition[1067]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 04:48:53.898612 ignition[1067]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 04:48:53.901289 unknown[1067]: wrote ssh authorized keys file for user: core
Nov 5 04:48:53.903394 ignition[1067]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 04:48:53.905850 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 04:48:53.909039 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 5 04:48:53.954735 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 04:48:54.111235 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 04:48:54.114472 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 04:48:54.117433 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 04:48:54.117433 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 04:48:54.117433 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 04:48:54.117433 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 04:48:54.117433 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 04:48:54.117433 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 04:48:54.117433 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 04:48:54.193991 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 04:48:54.197137 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 04:48:54.197137 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 04:48:54.338818 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 04:48:54.338818 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 04:48:54.346263 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 5 04:48:54.858779 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 04:48:55.632244 ignition[1067]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 04:48:55.632244 ignition[1067]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 04:48:55.638230 ignition[1067]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 04:48:55.641482 ignition[1067]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 04:48:55.641482 ignition[1067]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 04:48:55.641482 ignition[1067]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 04:48:55.641482 ignition[1067]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 04:48:55.641482 ignition[1067]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 04:48:55.641482 ignition[1067]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 5 04:48:55.641482 ignition[1067]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 04:48:55.666277 ignition[1067]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 04:48:55.676081 ignition[1067]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 04:48:55.678669 ignition[1067]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 04:48:55.678669 ignition[1067]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 04:48:55.678669 ignition[1067]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 04:48:55.678669 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 04:48:55.678669 ignition[1067]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 04:48:55.678669 ignition[1067]: INFO : files: files passed
Nov 5 04:48:55.678669 ignition[1067]: INFO : Ignition finished successfully
Nov 5 04:48:55.685025 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 04:48:55.687689 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 04:48:55.694083 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 04:48:55.709283 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 04:48:55.709411 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 04:48:55.714284 initrd-setup-root-after-ignition[1098]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 5 04:48:55.716725 initrd-setup-root-after-ignition[1104]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 04:48:55.719481 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 04:48:55.719481 initrd-setup-root-after-ignition[1100]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 04:48:55.726990 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 04:48:55.728249 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 04:48:55.730106 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 04:48:55.796640 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 04:48:55.798419 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 04:48:55.803273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 04:48:55.804297 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 04:48:55.810380 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 04:48:55.811941 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 04:48:55.851619 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 04:48:55.857063 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 04:48:55.886988 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 04:48:55.887185 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 04:48:55.888570 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 04:48:55.897179 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 04:48:55.898341 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 04:48:55.898497 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 04:48:55.904316 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 04:48:55.905498 systemd[1]: Stopped target basic.target - Basic System. Nov 5 04:48:55.913196 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 04:48:55.914311 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 04:48:55.914865 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 04:48:55.923952 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 04:48:55.924807 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 04:48:55.925341 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 04:48:55.925916 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 04:48:55.934975 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 04:48:55.938349 systemd[1]: Stopped target swap.target - Swaps. Nov 5 04:48:55.941386 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 04:48:55.941597 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 04:48:55.944420 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 04:48:55.945303 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 04:48:55.951331 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 04:48:55.951497 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 04:48:55.955228 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Nov 5 04:48:55.955384 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 04:48:55.961605 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 04:48:55.961809 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 04:48:55.962618 systemd[1]: Stopped target paths.target - Path Units. Nov 5 04:48:55.963336 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 04:48:55.972909 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 04:48:55.973645 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 04:48:55.974391 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 04:48:55.981384 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 04:48:55.981500 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 04:48:55.984439 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 04:48:55.984528 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 04:48:55.986939 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 04:48:55.987089 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 04:48:55.989880 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 04:48:55.989994 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 04:48:55.997490 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 04:48:55.999189 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 04:48:56.007644 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 04:48:56.009483 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 04:48:56.013612 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Nov 5 04:48:56.013831 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 04:48:56.014695 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 04:48:56.014851 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 04:48:56.027387 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 04:48:56.027512 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 04:48:56.053589 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 04:48:56.085428 ignition[1125]: INFO : Ignition 2.22.0 Nov 5 04:48:56.085428 ignition[1125]: INFO : Stage: umount Nov 5 04:48:56.088272 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 04:48:56.088272 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 04:48:56.088272 ignition[1125]: INFO : umount: umount passed Nov 5 04:48:56.088272 ignition[1125]: INFO : Ignition finished successfully Nov 5 04:48:56.096052 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 04:48:56.096252 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 04:48:56.101830 systemd[1]: Stopped target network.target - Network. Nov 5 04:48:56.102611 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 04:48:56.102691 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 04:48:56.105425 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 04:48:56.105486 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 04:48:56.108830 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 04:48:56.108890 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 04:48:56.109364 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 04:48:56.109411 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Nov 5 04:48:56.118533 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 04:48:56.121549 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 04:48:56.136050 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 04:48:56.136223 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 04:48:56.142253 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 04:48:56.142375 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 04:48:56.144174 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 04:48:56.144284 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 04:48:56.151457 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 04:48:56.151598 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 04:48:56.158051 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 04:48:56.158751 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 04:48:56.158815 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 04:48:56.167056 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 04:48:56.167806 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 04:48:56.167874 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 04:48:56.170814 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 04:48:56.170882 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 04:48:56.171334 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 04:48:56.171401 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 04:48:56.172215 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Nov 5 04:48:56.200959 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 04:48:56.201175 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 04:48:56.203666 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 04:48:56.203717 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 04:48:56.207388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 04:48:56.207436 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 04:48:56.210576 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 04:48:56.210636 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 04:48:56.216626 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 04:48:56.216685 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 04:48:56.221188 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 04:48:56.221243 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 04:48:56.227077 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 04:48:56.227485 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 04:48:56.227539 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 04:48:56.231656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 04:48:56.231715 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 04:48:56.237078 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 5 04:48:56.237135 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 04:48:56.238340 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Nov 5 04:48:56.238388 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 04:48:56.243268 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 04:48:56.243324 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 04:48:56.261018 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 04:48:56.261163 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 04:48:56.270955 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 04:48:56.271097 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 04:48:56.272481 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 04:48:56.277320 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 04:48:56.312601 systemd[1]: Switching root. Nov 5 04:48:56.357629 systemd-journald[315]: Journal stopped Nov 5 04:48:57.629141 systemd-journald[315]: Received SIGTERM from PID 1 (systemd). Nov 5 04:48:57.629214 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 04:48:57.629230 kernel: SELinux: policy capability open_perms=1 Nov 5 04:48:57.629251 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 04:48:57.629265 kernel: SELinux: policy capability always_check_network=0 Nov 5 04:48:57.629328 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 04:48:57.629343 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 04:48:57.629356 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 04:48:57.629372 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 04:48:57.629390 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 04:48:57.629402 kernel: audit: type=1403 audit(1762318136.695:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 04:48:57.629420 systemd[1]: Successfully loaded SELinux policy in 74.420ms. 
Nov 5 04:48:57.629444 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.073ms. Nov 5 04:48:57.629460 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 04:48:57.629479 systemd[1]: Detected virtualization kvm. Nov 5 04:48:57.629495 systemd[1]: Detected architecture x86-64. Nov 5 04:48:57.629521 systemd[1]: Detected first boot. Nov 5 04:48:57.629536 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 04:48:57.629549 zram_generator::config[1172]: No configuration found. Nov 5 04:48:57.629577 kernel: Guest personality initialized and is inactive Nov 5 04:48:57.629706 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Nov 5 04:48:57.629724 kernel: Initialized host personality Nov 5 04:48:57.629737 kernel: NET: Registered PF_VSOCK protocol family Nov 5 04:48:57.629750 systemd[1]: Populated /etc with preset unit settings. Nov 5 04:48:57.629764 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 04:48:57.629777 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 04:48:57.629817 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 04:48:57.629833 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 04:48:57.629846 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 04:48:57.629859 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 04:48:57.629872 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 04:48:57.629884 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Nov 5 04:48:57.629906 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 04:48:57.629919 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 04:48:57.629932 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 04:48:57.629945 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 04:48:57.629958 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 04:48:57.629971 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 04:48:57.630335 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 04:48:57.630361 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 04:48:57.630375 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 04:48:57.630389 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 04:48:57.630401 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 04:48:57.630415 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 04:48:57.630429 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 04:48:57.630450 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 04:48:57.630464 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 04:48:57.630477 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 04:48:57.630490 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 04:48:57.630503 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 04:48:57.630515 systemd[1]: Reached target slices.target - Slice Units. 
Nov 5 04:48:57.630528 systemd[1]: Reached target swap.target - Swaps. Nov 5 04:48:57.630540 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 04:48:57.630561 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 04:48:57.630575 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 04:48:57.630588 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 04:48:57.630601 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 04:48:57.630614 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 04:48:57.630628 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 04:48:57.630640 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 04:48:57.630663 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 04:48:57.630676 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 04:48:57.630689 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 04:48:57.630702 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 04:48:57.630715 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 04:48:57.630728 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 04:48:57.630741 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 04:48:57.630762 systemd[1]: Reached target machines.target - Containers. Nov 5 04:48:57.630779 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Nov 5 04:48:57.630805 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 04:48:57.630819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 04:48:57.630831 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 04:48:57.630844 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 04:48:57.630865 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 04:48:57.630879 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 04:48:57.630892 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 04:48:57.630904 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 04:48:57.630918 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 04:48:57.630933 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 04:48:57.630946 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 04:48:57.630966 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 04:48:57.630981 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 04:48:57.630994 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 04:48:57.631015 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 04:48:57.631028 kernel: ACPI: bus type drm_connector registered Nov 5 04:48:57.631041 kernel: fuse: init (API version 7.41) Nov 5 04:48:57.631053 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Nov 5 04:48:57.631073 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 04:48:57.631086 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 04:48:57.631108 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 04:48:57.631122 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 04:48:57.631143 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 5 04:48:57.631178 systemd-journald[1257]: Collecting audit messages is disabled. Nov 5 04:48:57.631206 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 04:48:57.631219 systemd-journald[1257]: Journal started Nov 5 04:48:57.631241 systemd-journald[1257]: Runtime Journal (/run/log/journal/1e31b4d156c14a21a27b13eb2d164faf) is 6M, max 48.2M, 42.2M free. Nov 5 04:48:57.632855 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 04:48:57.282729 systemd[1]: Queued start job for default target multi-user.target. Nov 5 04:48:57.307960 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 5 04:48:57.308534 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 04:48:57.639040 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 04:48:57.640316 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 04:48:57.642060 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 04:48:57.643935 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 04:48:57.645852 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 04:48:57.647740 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Nov 5 04:48:57.650129 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 04:48:57.652426 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 04:48:57.652651 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 04:48:57.654860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 04:48:57.655078 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 04:48:57.657212 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 04:48:57.657436 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 04:48:57.659431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 04:48:57.659659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 04:48:57.661916 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 04:48:57.662145 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 04:48:57.664183 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 04:48:57.664398 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 04:48:57.666537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 04:48:57.668815 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 04:48:57.671943 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 04:48:57.674401 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 04:48:57.695136 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 04:48:57.697649 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 04:48:57.701052 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Nov 5 04:48:57.703959 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 04:48:57.705988 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 04:48:57.706018 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 04:48:57.708683 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 04:48:57.711002 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 04:48:57.717545 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 04:48:57.720497 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 04:48:57.721180 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 04:48:57.722657 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 04:48:57.724777 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 04:48:57.726215 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 04:48:57.729964 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 04:48:57.733078 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 04:48:57.735609 systemd-journald[1257]: Time spent on flushing to /var/log/journal/1e31b4d156c14a21a27b13eb2d164faf is 36.397ms for 966 entries. Nov 5 04:48:57.735609 systemd-journald[1257]: System Journal (/var/log/journal/1e31b4d156c14a21a27b13eb2d164faf) is 8M, max 163.5M, 155.5M free. Nov 5 04:48:57.790925 systemd-journald[1257]: Received client request to flush runtime journal. 
Nov 5 04:48:57.790991 kernel: loop1: detected capacity change from 0 to 111544 Nov 5 04:48:57.738134 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 04:48:57.742499 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 04:48:57.746224 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 04:48:57.749186 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 04:48:57.754455 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 04:48:57.774378 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Nov 5 04:48:57.774394 systemd-tmpfiles[1292]: ACLs are not supported, ignoring. Nov 5 04:48:57.779286 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 04:48:57.782021 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 04:48:57.785565 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 04:48:57.793073 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 04:48:57.795310 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 04:48:57.806826 kernel: loop2: detected capacity change from 0 to 119080 Nov 5 04:48:57.813432 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 04:48:57.834162 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 04:48:57.838209 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 04:48:57.842223 kernel: loop3: detected capacity change from 0 to 219144 Nov 5 04:48:57.841836 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 04:48:57.857664 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Nov 5 04:48:57.870814 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Nov 5 04:48:57.870832 systemd-tmpfiles[1312]: ACLs are not supported, ignoring. Nov 5 04:48:57.871867 kernel: loop4: detected capacity change from 0 to 111544 Nov 5 04:48:57.878707 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 04:48:57.882823 kernel: loop5: detected capacity change from 0 to 119080 Nov 5 04:48:57.892823 kernel: loop6: detected capacity change from 0 to 219144 Nov 5 04:48:57.898852 (sd-merge)[1316]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 5 04:48:57.912361 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 04:48:57.932808 (sd-merge)[1316]: Merged extensions into '/usr'. Nov 5 04:48:58.033281 systemd[1]: Reload requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 04:48:58.033444 systemd[1]: Reloading... Nov 5 04:48:58.093910 zram_generator::config[1347]: No configuration found. Nov 5 04:48:58.150318 systemd-resolved[1311]: Positive Trust Anchors: Nov 5 04:48:58.150336 systemd-resolved[1311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 04:48:58.150342 systemd-resolved[1311]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 04:48:58.150373 systemd-resolved[1311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 04:48:58.155611 systemd-resolved[1311]: Defaulting to hostname 'linux'. Nov 5 04:48:58.331910 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 04:48:58.332432 systemd[1]: Reloading finished in 298 ms. Nov 5 04:48:58.371334 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 04:48:58.373774 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 04:48:58.378995 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 04:48:58.399906 systemd[1]: Starting ensure-sysext.service... Nov 5 04:48:58.402650 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 04:48:58.424304 systemd[1]: Reload requested from client PID 1387 ('systemctl') (unit ensure-sysext.service)... Nov 5 04:48:58.424326 systemd[1]: Reloading... Nov 5 04:48:58.430626 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 04:48:58.430914 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 04:48:58.431229 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Nov 5 04:48:58.431502 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 04:48:58.432443 systemd-tmpfiles[1388]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 04:48:58.432713 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. Nov 5 04:48:58.432783 systemd-tmpfiles[1388]: ACLs are not supported, ignoring. Nov 5 04:48:58.439640 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 04:48:58.439727 systemd-tmpfiles[1388]: Skipping /boot Nov 5 04:48:58.450593 systemd-tmpfiles[1388]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 04:48:58.450669 systemd-tmpfiles[1388]: Skipping /boot Nov 5 04:48:58.479818 zram_generator::config[1420]: No configuration found. Nov 5 04:48:58.671761 systemd[1]: Reloading finished in 247 ms. Nov 5 04:48:58.696271 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 04:48:58.721389 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 04:48:58.733657 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 04:48:58.736696 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 04:48:58.748092 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 04:48:58.752103 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 04:48:58.757997 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 04:48:58.762091 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 04:48:58.770441 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 5 04:48:58.770618 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 04:48:58.776076 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 04:48:58.779952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 04:48:58.788281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 04:48:58.790422 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 04:48:58.790526 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 04:48:58.790620 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:48:58.793151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 04:48:58.797515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 04:48:58.800223 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 04:48:58.800441 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 04:48:58.803276 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 04:48:58.803499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 04:48:58.811307 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 04:48:58.819194 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 04:48:58.819504 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 04:48:58.822887 systemd-udevd[1467]: Using default interface naming scheme 'v257'.
Nov 5 04:48:58.823460 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:48:58.823678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 04:48:58.827029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 04:48:58.831343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 04:48:58.834031 augenrules[1490]: No rules
Nov 5 04:48:58.839991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 04:48:58.843697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 04:48:58.843839 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 04:48:58.843947 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:48:58.847046 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 04:48:58.847346 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 04:48:58.849863 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 04:48:58.853150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 04:48:58.853372 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 04:48:58.855942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 04:48:58.856172 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 04:48:58.859438 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 04:48:58.859688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 04:48:58.865452 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 04:48:58.871803 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 04:48:58.886051 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:48:58.887484 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 04:48:58.889423 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 04:48:58.891994 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 04:48:58.900233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 04:48:58.902939 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 04:48:58.908907 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 04:48:58.910883 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 04:48:58.910923 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 04:48:58.924990 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 04:48:58.926736 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 04:48:58.926775 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 04:48:58.930359 systemd[1]: Finished ensure-sysext.service.
Nov 5 04:48:58.932312 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 04:48:58.932540 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 04:48:58.943681 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 04:48:58.943957 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 04:48:58.946144 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 04:48:58.946454 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 04:48:58.948853 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 04:48:58.949455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 04:48:58.962615 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 04:48:58.962712 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 04:48:58.965825 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 04:48:58.987468 augenrules[1514]: /sbin/augenrules: No change
Nov 5 04:48:58.999557 augenrules[1552]: No rules
Nov 5 04:48:59.010922 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 04:48:59.011247 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 04:48:59.027679 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 04:48:59.087392 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 04:48:59.092084 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 04:48:59.098811 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 5 04:48:59.102807 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 04:48:59.104820 kernel: ACPI: button: Power Button [PWRF]
Nov 5 04:48:59.129151 systemd-networkd[1534]: lo: Link UP
Nov 5 04:48:59.129163 systemd-networkd[1534]: lo: Gained carrier
Nov 5 04:48:59.131253 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 04:48:59.132118 systemd-networkd[1534]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 04:48:59.132131 systemd-networkd[1534]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 04:48:59.132506 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 04:48:59.133499 systemd[1]: Reached target network.target - Network.
Nov 5 04:48:59.133825 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 04:48:59.134274 systemd-networkd[1534]: eth0: Link UP
Nov 5 04:48:59.134525 systemd-networkd[1534]: eth0: Gained carrier
Nov 5 04:48:59.134546 systemd-networkd[1534]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 04:48:59.136920 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 04:48:59.140434 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 04:48:59.153032 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 04:48:59.155969 systemd-networkd[1534]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 04:48:59.156886 systemd-timesyncd[1542]: Network configuration changed, trying to establish connection.
Nov 5 04:48:59.767197 systemd-resolved[1311]: Clock change detected. Flushing caches.
Nov 5 04:48:59.767278 systemd-timesyncd[1542]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 5 04:48:59.767321 systemd-timesyncd[1542]: Initial clock synchronization to Wed 2025-11-05 04:48:59.767080 UTC.
Nov 5 04:48:59.801484 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 04:48:59.914829 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 04:48:59.947132 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 5 04:48:59.947540 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 5 04:48:59.972176 kernel: kvm_amd: TSC scaling supported
Nov 5 04:48:59.972236 kernel: kvm_amd: Nested Virtualization enabled
Nov 5 04:48:59.972368 kernel: kvm_amd: Nested Paging enabled
Nov 5 04:48:59.974791 kernel: kvm_amd: LBR virtualization supported
Nov 5 04:48:59.974821 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 5 04:48:59.974835 kernel: kvm_amd: Virtual GIF supported
Nov 5 04:49:00.034775 kernel: EDAC MC: Ver: 3.0.0
Nov 5 04:49:00.063493 ldconfig[1459]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 5 04:49:00.069858 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 5 04:49:00.072249 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 5 04:49:00.115505 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 5 04:49:00.175155 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 04:49:00.179871 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 04:49:00.181780 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 5 04:49:00.183978 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 5 04:49:00.186151 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 5 04:49:00.188403 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 5 04:49:00.190347 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 5 04:49:00.192507 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 5 04:49:00.194632 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 5 04:49:00.194671 systemd[1]: Reached target paths.target - Path Units.
Nov 5 04:49:00.196229 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 04:49:00.198979 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 5 04:49:00.202687 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 5 04:49:00.206614 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 5 04:49:00.208942 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 5 04:49:00.211111 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 5 04:49:00.215667 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 5 04:49:00.217837 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 5 04:49:00.220529 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 5 04:49:00.223107 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 04:49:00.224771 systemd[1]: Reached target basic.target - Basic System.
Nov 5 04:49:00.226424 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 5 04:49:00.226456 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 5 04:49:00.227645 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 5 04:49:00.230462 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 5 04:49:00.243144 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 5 04:49:00.246841 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 5 04:49:00.249834 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 5 04:49:00.250618 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 5 04:49:00.252855 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 5 04:49:00.257797 jq[1607]: false
Nov 5 04:49:00.257414 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 5 04:49:00.262793 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 5 04:49:00.264577 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 5 04:49:00.267775 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Refreshing passwd entry cache
Nov 5 04:49:00.267454 oslogin_cache_refresh[1609]: Refreshing passwd entry cache
Nov 5 04:49:00.273807 extend-filesystems[1608]: Found /dev/vda6
Nov 5 04:49:00.278212 extend-filesystems[1608]: Found /dev/vda9
Nov 5 04:49:00.281184 extend-filesystems[1608]: Checking size of /dev/vda9
Nov 5 04:49:00.283254 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Failure getting users, quitting
Nov 5 04:49:00.283254 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 04:49:00.283229 oslogin_cache_refresh[1609]: Failure getting users, quitting
Nov 5 04:49:00.283692 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Refreshing group entry cache
Nov 5 04:49:00.283249 oslogin_cache_refresh[1609]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 5 04:49:00.283310 oslogin_cache_refresh[1609]: Refreshing group entry cache
Nov 5 04:49:00.286980 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 5 04:49:00.294725 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Failure getting groups, quitting
Nov 5 04:49:00.294725 google_oslogin_nss_cache[1609]: oslogin_cache_refresh[1609]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 04:49:00.294885 extend-filesystems[1608]: Resized partition /dev/vda9
Nov 5 04:49:00.294439 oslogin_cache_refresh[1609]: Failure getting groups, quitting
Nov 5 04:49:00.297063 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 5 04:49:00.294455 oslogin_cache_refresh[1609]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 5 04:49:00.298837 extend-filesystems[1626]: resize2fs 1.47.3 (8-Jul-2025)
Nov 5 04:49:00.301397 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 5 04:49:00.302142 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 5 04:49:00.303699 systemd[1]: Starting update-engine.service - Update Engine...
Nov 5 04:49:00.306756 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Nov 5 04:49:00.317957 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 5 04:49:00.323541 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 5 04:49:00.336262 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Nov 5 04:49:00.329200 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 5 04:49:00.351998 jq[1631]: true
Nov 5 04:49:00.329540 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 5 04:49:00.330003 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 5 04:49:00.330334 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 5 04:49:00.333239 systemd[1]: motdgen.service: Deactivated successfully.
Nov 5 04:49:00.333597 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 5 04:49:00.338056 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 5 04:49:00.338407 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 5 04:49:00.354059 extend-filesystems[1626]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 5 04:49:00.354059 extend-filesystems[1626]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 5 04:49:00.354059 extend-filesystems[1626]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Nov 5 04:49:00.362225 extend-filesystems[1608]: Resized filesystem in /dev/vda9
Nov 5 04:49:00.364819 jq[1643]: true
Nov 5 04:49:00.355567 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 5 04:49:00.356029 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 5 04:49:00.369765 update_engine[1630]: I20251105 04:49:00.369408 1630 main.cc:92] Flatcar Update Engine starting
Nov 5 04:49:00.382879 tar[1639]: linux-amd64/LICENSE
Nov 5 04:49:00.383173 tar[1639]: linux-amd64/helm
Nov 5 04:49:00.422268 dbus-daemon[1605]: [system] SELinux support is enabled
Nov 5 04:49:00.422519 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 5 04:49:00.430839 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 5 04:49:00.430876 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 5 04:49:00.433052 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 5 04:49:00.433078 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 5 04:49:00.433888 update_engine[1630]: I20251105 04:49:00.433828 1630 update_check_scheduler.cc:74] Next update check in 10m2s
Nov 5 04:49:00.435121 systemd-logind[1625]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 5 04:49:00.435150 systemd-logind[1625]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 5 04:49:00.435330 systemd[1]: Started update-engine.service - Update Engine.
Nov 5 04:49:00.435831 systemd-logind[1625]: New seat seat0.
Nov 5 04:49:00.437586 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 5 04:49:00.442074 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 5 04:49:00.475667 sshd_keygen[1647]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 5 04:49:00.484102 bash[1675]: Updated "/home/core/.ssh/authorized_keys"
Nov 5 04:49:00.520136 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 5 04:49:00.523590 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 5 04:49:00.529912 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 5 04:49:00.532022 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 5 04:49:00.554316 systemd[1]: issuegen.service: Deactivated successfully.
Nov 5 04:49:00.554596 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 5 04:49:00.560260 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 5 04:49:00.570435 locksmithd[1676]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 5 04:49:00.583630 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 5 04:49:00.591008 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 5 04:49:00.595645 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 5 04:49:00.597632 systemd[1]: Reached target getty.target - Login Prompts.
Nov 5 04:49:00.760034 containerd[1644]: time="2025-11-05T04:49:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 5 04:49:00.760864 containerd[1644]: time="2025-11-05T04:49:00.760834287Z" level=info msg="starting containerd" revision=75cb2b7193e4e490e9fbdc236c0e811ccaba3376 version=v2.1.4
Nov 5 04:49:00.774615 containerd[1644]: time="2025-11-05T04:49:00.774575808Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.355µs"
Nov 5 04:49:00.774615 containerd[1644]: time="2025-11-05T04:49:00.774604812Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 5 04:49:00.774688 containerd[1644]: time="2025-11-05T04:49:00.774652391Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 5 04:49:00.774688 containerd[1644]: time="2025-11-05T04:49:00.774664394Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 5 04:49:00.774907 containerd[1644]: time="2025-11-05T04:49:00.774883144Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 5 04:49:00.774907 containerd[1644]: time="2025-11-05T04:49:00.774902340Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775001 containerd[1644]: time="2025-11-05T04:49:00.774970488Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775001 containerd[1644]: time="2025-11-05T04:49:00.774992850Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775241 containerd[1644]: time="2025-11-05T04:49:00.775215888Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775241 containerd[1644]: time="2025-11-05T04:49:00.775233792Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775281 containerd[1644]: time="2025-11-05T04:49:00.775243851Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775281 containerd[1644]: time="2025-11-05T04:49:00.775251815Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775453 containerd[1644]: time="2025-11-05T04:49:00.775430360Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775453 containerd[1644]: time="2025-11-05T04:49:00.775445699Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775569 containerd[1644]: time="2025-11-05T04:49:00.775555074Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775864 containerd[1644]: time="2025-11-05T04:49:00.775846381Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775893 containerd[1644]: time="2025-11-05T04:49:00.775880555Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 5 04:49:00.775913 containerd[1644]: time="2025-11-05T04:49:00.775891796Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 5 04:49:00.775932 containerd[1644]: time="2025-11-05T04:49:00.775925599Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 5 04:49:00.776149 containerd[1644]: time="2025-11-05T04:49:00.776126786Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 5 04:49:00.776219 containerd[1644]: time="2025-11-05T04:49:00.776205384Z" level=info msg="metadata content store policy set" policy=shared
Nov 5 04:49:00.782185 containerd[1644]: time="2025-11-05T04:49:00.782154460Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 5 04:49:00.782224 containerd[1644]: time="2025-11-05T04:49:00.782194826Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 5 04:49:00.782339 containerd[1644]: time="2025-11-05T04:49:00.782285105Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1
Nov 5 04:49:00.782339 containerd[1644]: time="2025-11-05T04:49:00.782301025Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 5 04:49:00.782339 containerd[1644]: time="2025-11-05T04:49:00.782337764Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 5 04:49:00.782400 containerd[1644]: time="2025-11-05T04:49:00.782349526Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 5 04:49:00.782400 containerd[1644]: time="2025-11-05T04:49:00.782358784Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 5 04:49:00.782400 containerd[1644]: time="2025-11-05T04:49:00.782391325Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 5 04:49:00.782463 containerd[1644]: time="2025-11-05T04:49:00.782403047Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 5 04:49:00.782463 containerd[1644]: time="2025-11-05T04:49:00.782424928Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 5 04:49:00.782463 containerd[1644]: time="2025-11-05T04:49:00.782434105Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 5 04:49:00.782463 containerd[1644]: time="2025-11-05T04:49:00.782443623Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 5 04:49:00.782463 containerd[1644]: time="2025-11-05T04:49:00.782452519Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 5 04:49:00.782463 containerd[1644]: time="2025-11-05T04:49:00.782462528Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 5 04:49:00.782667 containerd[1644]: time="2025-11-05T04:49:00.782609544Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 5 04:49:00.782667 containerd[1644]: time="2025-11-05T04:49:00.782632026Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 5 04:49:00.782667 containerd[1644]: time="2025-11-05T04:49:00.782652444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782669146Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782679034Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782688853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782712567Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782768121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782780354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782789732Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782799741Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782820580Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782881624Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 5 04:49:00.782927 containerd[1644]: time="2025-11-05T04:49:00.782911099Z" level=info msg="Start snapshots syncer"
Nov 5 04:49:00.783130 containerd[1644]: time="2025-11-05T04:49:00.782944272Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 5 04:49:00.783294 containerd[1644]: time="2025-11-05T04:49:00.783233274Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 5 04:49:00.783448 containerd[1644]: time="2025-11-05T04:49:00.783311721Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 5 04:49:00.783448 containerd[1644]: time="2025-11-05T04:49:00.783390659Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783497750Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783518889Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783528317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783537975Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783547994Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783558624Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783568552Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783577279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783609750Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783637892Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp
type=io.containerd.tracing.processor.v1 Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783649574Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783656958Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783665244Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 04:49:00.784182 containerd[1644]: time="2025-11-05T04:49:00.783673970Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 04:49:00.784445 containerd[1644]: time="2025-11-05T04:49:00.783682987Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 04:49:00.784445 containerd[1644]: time="2025-11-05T04:49:00.783692545Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 04:49:00.784445 containerd[1644]: time="2025-11-05T04:49:00.783731047Z" level=info msg="runtime interface created" Nov 5 04:49:00.784445 containerd[1644]: time="2025-11-05T04:49:00.783755172Z" level=info msg="created NRI interface" Nov 5 04:49:00.784445 containerd[1644]: time="2025-11-05T04:49:00.784164229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 04:49:00.784445 containerd[1644]: time="2025-11-05T04:49:00.784175791Z" level=info msg="Connect containerd service" Nov 5 04:49:00.784445 containerd[1644]: time="2025-11-05T04:49:00.784197732Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 04:49:00.785588 containerd[1644]: time="2025-11-05T04:49:00.785520092Z" level=error 
msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 04:49:00.864108 tar[1639]: linux-amd64/README.md Nov 5 04:49:00.891614 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 04:49:00.922022 containerd[1644]: time="2025-11-05T04:49:00.921976777Z" level=info msg="Start subscribing containerd event" Nov 5 04:49:00.922107 containerd[1644]: time="2025-11-05T04:49:00.922059773Z" level=info msg="Start recovering state" Nov 5 04:49:00.922210 containerd[1644]: time="2025-11-05T04:49:00.922188484Z" level=info msg="Start event monitor" Nov 5 04:49:00.922233 containerd[1644]: time="2025-11-05T04:49:00.922212910Z" level=info msg="Start cni network conf syncer for default" Nov 5 04:49:00.922233 containerd[1644]: time="2025-11-05T04:49:00.922223430Z" level=info msg="Start streaming server" Nov 5 04:49:00.922268 containerd[1644]: time="2025-11-05T04:49:00.922236604Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 04:49:00.922268 containerd[1644]: time="2025-11-05T04:49:00.922245301Z" level=info msg="runtime interface starting up..." Nov 5 04:49:00.922268 containerd[1644]: time="2025-11-05T04:49:00.922251442Z" level=info msg="starting plugins..." Nov 5 04:49:00.922268 containerd[1644]: time="2025-11-05T04:49:00.922264437Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 04:49:00.922602 containerd[1644]: time="2025-11-05T04:49:00.922550383Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 04:49:00.922646 containerd[1644]: time="2025-11-05T04:49:00.922629181Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 5 04:49:00.922720 containerd[1644]: time="2025-11-05T04:49:00.922702729Z" level=info msg="containerd successfully booted in 0.163488s" Nov 5 04:49:00.922830 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 04:49:01.581113 systemd-networkd[1534]: eth0: Gained IPv6LL Nov 5 04:49:01.584398 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 04:49:01.587031 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 04:49:01.590297 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 04:49:01.593353 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:49:01.596207 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 04:49:01.625192 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 04:49:01.627452 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 04:49:01.627727 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 5 04:49:01.631123 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 04:49:02.098680 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 04:49:02.102007 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:54020.service - OpenSSH per-connection server daemon (10.0.0.1:54020). Nov 5 04:49:02.197854 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 54020 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:02.199956 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:02.208522 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 04:49:02.211806 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Nov 5 04:49:02.220711 systemd-logind[1625]: New session 1 of user core. Nov 5 04:49:02.247013 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 04:49:02.252356 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 04:49:02.277564 (systemd)[1747]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 04:49:02.280081 systemd-logind[1625]: New session c1 of user core. Nov 5 04:49:02.421320 systemd[1747]: Queued start job for default target default.target. Nov 5 04:49:02.439436 systemd[1747]: Created slice app.slice - User Application Slice. Nov 5 04:49:02.439466 systemd[1747]: Reached target paths.target - Paths. Nov 5 04:49:02.439514 systemd[1747]: Reached target timers.target - Timers. Nov 5 04:49:02.441305 systemd[1747]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 04:49:02.454199 systemd[1747]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 04:49:02.454339 systemd[1747]: Reached target sockets.target - Sockets. Nov 5 04:49:02.454386 systemd[1747]: Reached target basic.target - Basic System. Nov 5 04:49:02.454429 systemd[1747]: Reached target default.target - Main User Target. Nov 5 04:49:02.454466 systemd[1747]: Startup finished in 167ms. Nov 5 04:49:02.455233 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 04:49:02.466916 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 04:49:02.487676 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:54036.service - OpenSSH per-connection server daemon (10.0.0.1:54036). Nov 5 04:49:02.547788 sshd[1758]: Accepted publickey for core from 10.0.0.1 port 54036 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:02.549136 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:02.553938 systemd-logind[1625]: New session 2 of user core. 
Nov 5 04:49:03.391174 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 04:49:03.410727 sshd[1761]: Connection closed by 10.0.0.1 port 54036 Nov 5 04:49:03.413862 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:03.422574 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:54036.service: Deactivated successfully. Nov 5 04:49:03.424553 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 04:49:03.425501 systemd-logind[1625]: Session 2 logged out. Waiting for processes to exit. Nov 5 04:49:03.428411 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:54050.service - OpenSSH per-connection server daemon (10.0.0.1:54050). Nov 5 04:49:03.431922 systemd-logind[1625]: Removed session 2. Nov 5 04:49:03.478402 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:49:03.481101 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 04:49:03.482677 systemd[1]: Startup finished in 3.120s (kernel) + 6.875s (initrd) + 6.249s (userspace) = 16.245s. Nov 5 04:49:03.488226 (kubelet)[1775]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 04:49:03.496838 sshd[1767]: Accepted publickey for core from 10.0.0.1 port 54050 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:03.498415 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:03.503171 systemd-logind[1625]: New session 3 of user core. Nov 5 04:49:03.512852 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 04:49:03.528098 sshd[1776]: Connection closed by 10.0.0.1 port 54050 Nov 5 04:49:03.528353 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:03.533464 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:54050.service: Deactivated successfully. Nov 5 04:49:03.535813 systemd[1]: session-3.scope: Deactivated successfully. 
Nov 5 04:49:03.536654 systemd-logind[1625]: Session 3 logged out. Waiting for processes to exit. Nov 5 04:49:03.538153 systemd-logind[1625]: Removed session 3. Nov 5 04:49:04.045055 kubelet[1775]: E1105 04:49:04.044979 1775 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 04:49:04.049382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 04:49:04.049601 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 04:49:04.050165 systemd[1]: kubelet.service: Consumed 2.258s CPU time, 257.4M memory peak. Nov 5 04:49:13.553375 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:55856.service - OpenSSH per-connection server daemon (10.0.0.1:55856). Nov 5 04:49:13.618919 sshd[1793]: Accepted publickey for core from 10.0.0.1 port 55856 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:13.620852 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:13.627170 systemd-logind[1625]: New session 4 of user core. Nov 5 04:49:13.637025 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 04:49:13.652045 sshd[1796]: Connection closed by 10.0.0.1 port 55856 Nov 5 04:49:13.652376 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:13.662423 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:55856.service: Deactivated successfully. Nov 5 04:49:13.664811 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 04:49:13.665803 systemd-logind[1625]: Session 4 logged out. Waiting for processes to exit. Nov 5 04:49:13.669374 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:55860.service - OpenSSH per-connection server daemon (10.0.0.1:55860). 
Nov 5 04:49:13.670465 systemd-logind[1625]: Removed session 4. Nov 5 04:49:13.736187 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 55860 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:13.738131 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:13.744791 systemd-logind[1625]: New session 5 of user core. Nov 5 04:49:13.756211 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 04:49:13.769517 sshd[1805]: Connection closed by 10.0.0.1 port 55860 Nov 5 04:49:13.769982 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:13.785414 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:55860.service: Deactivated successfully. Nov 5 04:49:13.788203 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 04:49:13.789404 systemd-logind[1625]: Session 5 logged out. Waiting for processes to exit. Nov 5 04:49:13.793215 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:55862.service - OpenSSH per-connection server daemon (10.0.0.1:55862). Nov 5 04:49:13.793903 systemd-logind[1625]: Removed session 5. Nov 5 04:49:13.859656 sshd[1811]: Accepted publickey for core from 10.0.0.1 port 55862 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:13.862445 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:13.867270 systemd-logind[1625]: New session 6 of user core. Nov 5 04:49:13.874890 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 04:49:13.889960 sshd[1814]: Connection closed by 10.0.0.1 port 55862 Nov 5 04:49:13.890297 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:13.902934 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:55862.service: Deactivated successfully. Nov 5 04:49:13.905225 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 04:49:13.906074 systemd-logind[1625]: Session 6 logged out. 
Waiting for processes to exit. Nov 5 04:49:13.909524 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:55876.service - OpenSSH per-connection server daemon (10.0.0.1:55876). Nov 5 04:49:13.910413 systemd-logind[1625]: Removed session 6. Nov 5 04:49:13.987563 sshd[1820]: Accepted publickey for core from 10.0.0.1 port 55876 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:13.992465 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:14.004566 systemd-logind[1625]: New session 7 of user core. Nov 5 04:49:14.024163 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 04:49:14.084374 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 04:49:14.087862 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:49:14.089681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 04:49:14.093276 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:49:14.127644 sudo[1824]: pam_unix(sudo:session): session closed for user root Nov 5 04:49:14.132185 sshd[1823]: Connection closed by 10.0.0.1 port 55876 Nov 5 04:49:14.132618 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:14.149821 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:55876.service: Deactivated successfully. Nov 5 04:49:14.152494 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 04:49:14.153844 systemd-logind[1625]: Session 7 logged out. Waiting for processes to exit. Nov 5 04:49:14.160312 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:55884.service - OpenSSH per-connection server daemon (10.0.0.1:55884). Nov 5 04:49:14.161327 systemd-logind[1625]: Removed session 7. 
Nov 5 04:49:14.243237 sshd[1833]: Accepted publickey for core from 10.0.0.1 port 55884 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:14.245066 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:14.250404 systemd-logind[1625]: New session 8 of user core. Nov 5 04:49:14.266040 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 04:49:14.286672 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 04:49:14.287015 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:49:14.390412 sudo[1838]: pam_unix(sudo:session): session closed for user root Nov 5 04:49:14.400324 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 04:49:14.400716 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:49:14.417362 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 04:49:14.427111 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:49:14.432223 (kubelet)[1847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 04:49:14.513121 augenrules[1872]: No rules Nov 5 04:49:14.515277 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 04:49:14.515702 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 04:49:14.517382 sudo[1837]: pam_unix(sudo:session): session closed for user root Nov 5 04:49:14.520366 sshd[1836]: Connection closed by 10.0.0.1 port 55884 Nov 5 04:49:14.520799 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:14.533044 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:55884.service: Deactivated successfully. 
Nov 5 04:49:14.535474 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 04:49:14.536360 systemd-logind[1625]: Session 8 logged out. Waiting for processes to exit. Nov 5 04:49:14.539501 kubelet[1847]: E1105 04:49:14.539408 1847 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 04:49:14.539914 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:55890.service - OpenSSH per-connection server daemon (10.0.0.1:55890). Nov 5 04:49:14.540998 systemd-logind[1625]: Removed session 8. Nov 5 04:49:14.546774 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 04:49:14.546981 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 04:49:14.552013 systemd[1]: kubelet.service: Consumed 527ms CPU time, 112.3M memory peak. Nov 5 04:49:14.595600 sshd[1882]: Accepted publickey for core from 10.0.0.1 port 55890 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:49:14.597642 sshd-session[1882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:49:14.602609 systemd-logind[1625]: New session 9 of user core. Nov 5 04:49:14.611866 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 04:49:14.626927 sudo[1887]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 04:49:14.627225 sudo[1887]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 04:49:15.108938 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Nov 5 04:49:15.125118 (dockerd)[1907]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 04:49:15.460579 dockerd[1907]: time="2025-11-05T04:49:15.460407334Z" level=info msg="Starting up" Nov 5 04:49:15.461370 dockerd[1907]: time="2025-11-05T04:49:15.461344371Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 04:49:15.479774 dockerd[1907]: time="2025-11-05T04:49:15.479707699Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 04:49:15.712670 dockerd[1907]: time="2025-11-05T04:49:15.712490861Z" level=info msg="Loading containers: start." Nov 5 04:49:15.725770 kernel: Initializing XFRM netlink socket Nov 5 04:49:15.997269 systemd-networkd[1534]: docker0: Link UP Nov 5 04:49:16.002527 dockerd[1907]: time="2025-11-05T04:49:16.002493379Z" level=info msg="Loading containers: done." 
Nov 5 04:49:16.019517 dockerd[1907]: time="2025-11-05T04:49:16.019466580Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 04:49:16.019649 dockerd[1907]: time="2025-11-05T04:49:16.019571807Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 04:49:16.019707 dockerd[1907]: time="2025-11-05T04:49:16.019675151Z" level=info msg="Initializing buildkit" Nov 5 04:49:16.049621 dockerd[1907]: time="2025-11-05T04:49:16.049597576Z" level=info msg="Completed buildkit initialization" Nov 5 04:49:16.055846 dockerd[1907]: time="2025-11-05T04:49:16.055818022Z" level=info msg="Daemon has completed initialization" Nov 5 04:49:16.055996 dockerd[1907]: time="2025-11-05T04:49:16.055926786Z" level=info msg="API listen on /run/docker.sock" Nov 5 04:49:16.056471 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 04:49:16.563789 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck626798846-merged.mount: Deactivated successfully. Nov 5 04:49:16.688431 containerd[1644]: time="2025-11-05T04:49:16.688336359Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 5 04:49:17.799212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount436707948.mount: Deactivated successfully. 
Nov 5 04:49:19.013487 containerd[1644]: time="2025-11-05T04:49:19.013389764Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:19.014187 containerd[1644]: time="2025-11-05T04:49:19.014127959Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=25393225" Nov 5 04:49:19.015298 containerd[1644]: time="2025-11-05T04:49:19.015263588Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:19.018136 containerd[1644]: time="2025-11-05T04:49:19.018067958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:19.019001 containerd[1644]: time="2025-11-05T04:49:19.018955372Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.330504007s" Nov 5 04:49:19.019044 containerd[1644]: time="2025-11-05T04:49:19.019014352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 5 04:49:19.019818 containerd[1644]: time="2025-11-05T04:49:19.019777924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 5 04:49:20.168914 containerd[1644]: time="2025-11-05T04:49:20.168836129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:20.169655 containerd[1644]: time="2025-11-05T04:49:20.169615590Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21151604" Nov 5 04:49:20.170677 containerd[1644]: time="2025-11-05T04:49:20.170639120Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:20.173143 containerd[1644]: time="2025-11-05T04:49:20.173109352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:20.174103 containerd[1644]: time="2025-11-05T04:49:20.174063462Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.154252355s" Nov 5 04:49:20.174197 containerd[1644]: time="2025-11-05T04:49:20.174102014Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 5 04:49:20.174621 containerd[1644]: time="2025-11-05T04:49:20.174596511Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 5 04:49:21.077987 containerd[1644]: time="2025-11-05T04:49:21.077921578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:21.078758 containerd[1644]: time="2025-11-05T04:49:21.078705839Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=0" Nov 5 04:49:21.079972 containerd[1644]: time="2025-11-05T04:49:21.079937449Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:21.082589 containerd[1644]: time="2025-11-05T04:49:21.082528338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:21.083557 containerd[1644]: time="2025-11-05T04:49:21.083520779Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 908.896946ms" Nov 5 04:49:21.083557 containerd[1644]: time="2025-11-05T04:49:21.083557097Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 5 04:49:21.084136 containerd[1644]: time="2025-11-05T04:49:21.084070800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 5 04:49:22.430752 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573917751.mount: Deactivated successfully. 
Nov 5 04:49:22.644216 containerd[1644]: time="2025-11-05T04:49:22.644142361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:22.645134 containerd[1644]: time="2025-11-05T04:49:22.645050464Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25960977" Nov 5 04:49:22.646326 containerd[1644]: time="2025-11-05T04:49:22.646281743Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:22.648147 containerd[1644]: time="2025-11-05T04:49:22.648104682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:22.649019 containerd[1644]: time="2025-11-05T04:49:22.648968672Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.564857186s" Nov 5 04:49:22.650748 containerd[1644]: time="2025-11-05T04:49:22.649103646Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 5 04:49:22.651248 containerd[1644]: time="2025-11-05T04:49:22.651199105Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 5 04:49:23.254991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2916712408.mount: Deactivated successfully. 
Nov 5 04:49:23.985663 containerd[1644]: time="2025-11-05T04:49:23.985580667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:49:23.986458 containerd[1644]: time="2025-11-05T04:49:23.986416374Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=21568511"
Nov 5 04:49:23.987689 containerd[1644]: time="2025-11-05T04:49:23.987646581Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:49:23.990472 containerd[1644]: time="2025-11-05T04:49:23.990407730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:49:23.991462 containerd[1644]: time="2025-11-05T04:49:23.991424667Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.33994788s"
Nov 5 04:49:23.991513 containerd[1644]: time="2025-11-05T04:49:23.991461726Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Nov 5 04:49:23.992110 containerd[1644]: time="2025-11-05T04:49:23.991930656Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Nov 5 04:49:24.544851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575667541.mount: Deactivated successfully.
Nov 5 04:49:24.551769 containerd[1644]: time="2025-11-05T04:49:24.551686377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:49:24.552476 containerd[1644]: time="2025-11-05T04:49:24.552437286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=0"
Nov 5 04:49:24.553841 containerd[1644]: time="2025-11-05T04:49:24.553800472Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:49:24.555836 containerd[1644]: time="2025-11-05T04:49:24.555800012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:49:24.556452 containerd[1644]: time="2025-11-05T04:49:24.556393274Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 564.434747ms"
Nov 5 04:49:24.556452 containerd[1644]: time="2025-11-05T04:49:24.556439140Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Nov 5 04:49:24.557128 containerd[1644]: time="2025-11-05T04:49:24.557096694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Nov 5 04:49:24.797451 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 5 04:49:24.799325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 04:49:25.037934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 04:49:25.060239 (kubelet)[2266]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 5 04:49:25.133250 kubelet[2266]: E1105 04:49:25.133164 2266 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 5 04:49:25.137933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 5 04:49:25.138151 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 5 04:49:25.138577 systemd[1]: kubelet.service: Consumed 262ms CPU time, 110.8M memory peak.
Nov 5 04:49:27.905646 containerd[1644]: time="2025-11-05T04:49:27.905577370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:49:27.906509 containerd[1644]: time="2025-11-05T04:49:27.906455016Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=72348001"
Nov 5 04:49:27.907685 containerd[1644]: time="2025-11-05T04:49:27.907635189Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:49:27.910318 containerd[1644]: time="2025-11-05T04:49:27.910269800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 04:49:27.911352 containerd[1644]: time="2025-11-05T04:49:27.911308398Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.354178041s"
Nov 5 04:49:27.911352 containerd[1644]: time="2025-11-05T04:49:27.911342021Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Nov 5 04:49:30.958914 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 04:49:30.959093 systemd[1]: kubelet.service: Consumed 262ms CPU time, 110.8M memory peak.
Nov 5 04:49:30.961770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 04:49:30.994911 systemd[1]: Reload requested from client PID 2345 ('systemctl') (unit session-9.scope)...
Nov 5 04:49:30.994942 systemd[1]: Reloading...
Nov 5 04:49:31.087765 zram_generator::config[2388]: No configuration found.
Nov 5 04:49:31.355289 systemd[1]: Reloading finished in 359 ms.
Nov 5 04:49:31.435481 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 5 04:49:31.435625 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 5 04:49:31.436072 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 04:49:31.436143 systemd[1]: kubelet.service: Consumed 162ms CPU time, 98.1M memory peak.
Nov 5 04:49:31.438197 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 5 04:49:31.639143 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 5 04:49:31.659161 (kubelet)[2437]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 5 04:49:31.703312 kubelet[2437]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 5 04:49:31.703312 kubelet[2437]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 5 04:49:31.703760 kubelet[2437]: I1105 04:49:31.703338 2437 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 5 04:49:32.087106 kubelet[2437]: I1105 04:49:32.086909 2437 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 5 04:49:32.087106 kubelet[2437]: I1105 04:49:32.086956 2437 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 5 04:49:32.087106 kubelet[2437]: I1105 04:49:32.087004 2437 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 5 04:49:32.087477 kubelet[2437]: I1105 04:49:32.087014 2437 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 5 04:49:32.088761 kubelet[2437]: I1105 04:49:32.088300 2437 server.go:956] "Client rotation is on, will bootstrap in background"
Nov 5 04:49:32.336528 kubelet[2437]: E1105 04:49:32.336479 2437 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Nov 5 04:49:32.337581 kubelet[2437]: I1105 04:49:32.337366 2437 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 5 04:49:32.341100 kubelet[2437]: I1105 04:49:32.341065 2437 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Nov 5 04:49:32.346690 kubelet[2437]: I1105 04:49:32.346662 2437 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Nov 5 04:49:32.347466 kubelet[2437]: I1105 04:49:32.347414 2437 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 5 04:49:32.347619 kubelet[2437]: I1105 04:49:32.347453 2437 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 5 04:49:32.347789 kubelet[2437]: I1105 04:49:32.347633 2437 topology_manager.go:138] "Creating topology manager with none policy"
Nov 5 04:49:32.347789 kubelet[2437]: I1105 04:49:32.347643 2437 container_manager_linux.go:306] "Creating device plugin manager"
Nov 5 04:49:32.347789 kubelet[2437]: I1105 04:49:32.347784 2437 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Nov 5 04:49:32.380923 kubelet[2437]: I1105 04:49:32.380887 2437 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 04:49:32.381695 kubelet[2437]: I1105 04:49:32.381664 2437 kubelet.go:475] "Attempting to sync node with API server"
Nov 5 04:49:32.381695 kubelet[2437]: I1105 04:49:32.381690 2437 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 5 04:49:32.381776 kubelet[2437]: I1105 04:49:32.381730 2437 kubelet.go:387] "Adding apiserver pod source"
Nov 5 04:49:32.381776 kubelet[2437]: I1105 04:49:32.381774 2437 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 5 04:49:32.384537 kubelet[2437]: E1105 04:49:32.384495 2437 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Nov 5 04:49:32.384585 kubelet[2437]: E1105 04:49:32.384555 2437 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Nov 5 04:49:32.385047 kubelet[2437]: I1105 04:49:32.385022 2437 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1"
Nov 5 04:49:32.387491 kubelet[2437]: I1105 04:49:32.386216 2437 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Nov 5 04:49:32.387491 kubelet[2437]: I1105 04:49:32.386257 2437 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Nov 5 04:49:32.387491 kubelet[2437]: W1105 04:49:32.386337 2437 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 5 04:49:32.392448 kubelet[2437]: I1105 04:49:32.392410 2437 server.go:1262] "Started kubelet"
Nov 5 04:49:32.392888 kubelet[2437]: I1105 04:49:32.392833 2437 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 5 04:49:32.392935 kubelet[2437]: I1105 04:49:32.392908 2437 server_v1.go:49] "podresources" method="list" useActivePods=true
Nov 5 04:49:32.393546 kubelet[2437]: I1105 04:49:32.393515 2437 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 5 04:49:32.393546 kubelet[2437]: I1105 04:49:32.393546 2437 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 5 04:49:32.393955 kubelet[2437]: I1105 04:49:32.393918 2437 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Nov 5 04:49:32.395619 kubelet[2437]: I1105 04:49:32.395586 2437 volume_manager.go:313] "Starting Kubelet Volume Manager"
Nov 5 04:49:32.395885 kubelet[2437]: E1105 04:49:32.395855 2437 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Nov 5 04:49:32.396121 kubelet[2437]: I1105 04:49:32.396093 2437 server.go:310] "Adding debug handlers to kubelet server"
Nov 5 04:49:32.396315 kubelet[2437]: I1105 04:49:32.396287 2437 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 5 04:49:32.396387 kubelet[2437]: I1105 04:49:32.396368 2437 reconciler.go:29] "Reconciler: start to sync state"
Nov 5 04:49:32.396931 kubelet[2437]: I1105 04:49:32.396902 2437 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 5 04:49:32.397770 kubelet[2437]: E1105 04:49:32.397696 2437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms"
Nov 5 04:49:32.398865 kubelet[2437]: E1105 04:49:32.397924 2437 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 04:49:32.398865 kubelet[2437]: I1105 04:49:32.398255 2437 factory.go:223] Registration of the systemd container factory successfully
Nov 5 04:49:32.398865 kubelet[2437]: I1105 04:49:32.398350 2437 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 5 04:49:32.399199 kubelet[2437]: E1105 04:49:32.397500 2437 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875030a66a9b34c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 04:49:32.39237102 +0000 UTC m=+0.728793279,LastTimestamp:2025-11-05 04:49:32.39237102 +0000 UTC m=+0.728793279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 5 04:49:32.400391 kubelet[2437]: E1105 04:49:32.400357 2437 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 5 04:49:32.400391 kubelet[2437]: I1105 04:49:32.400372 2437 factory.go:223] Registration of the containerd container factory successfully
Nov 5 04:49:32.404851 kubelet[2437]: I1105 04:49:32.404809 2437 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Nov 5 04:49:32.417179 kubelet[2437]: I1105 04:49:32.417147 2437 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 5 04:49:32.417179 kubelet[2437]: I1105 04:49:32.417165 2437 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 5 04:49:32.417179 kubelet[2437]: I1105 04:49:32.417186 2437 state_mem.go:36] "Initialized new in-memory state store"
Nov 5 04:49:32.420527 kubelet[2437]: I1105 04:49:32.420484 2437 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Nov 5 04:49:32.420527 kubelet[2437]: I1105 04:49:32.420534 2437 status_manager.go:244] "Starting to sync pod status with apiserver"
Nov 5 04:49:32.420654 kubelet[2437]: I1105 04:49:32.420579 2437 kubelet.go:2427] "Starting kubelet main sync loop"
Nov 5 04:49:32.420654 kubelet[2437]: E1105 04:49:32.420631 2437 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 5 04:49:32.421827 kubelet[2437]: E1105 04:49:32.421255 2437 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 04:49:32.453663 kubelet[2437]: I1105 04:49:32.453623 2437 policy_none.go:49] "None policy: Start"
Nov 5 04:49:32.453663 kubelet[2437]: I1105 04:49:32.453666 2437 memory_manager.go:187] "Starting memorymanager" policy="None"
Nov 5 04:49:32.453797 kubelet[2437]: I1105 04:49:32.453689 2437 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Nov 5 04:49:32.455250 kubelet[2437]: I1105 04:49:32.455218 2437 policy_none.go:47] "Start"
Nov 5 04:49:32.459958 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Nov 5 04:49:32.475630 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Nov 5 04:49:32.479436 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Nov 5 04:49:32.490820 kubelet[2437]: E1105 04:49:32.490787 2437 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Nov 5 04:49:32.491127 kubelet[2437]: I1105 04:49:32.491082 2437 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 5 04:49:32.491127 kubelet[2437]: I1105 04:49:32.491105 2437 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 5 04:49:32.491438 kubelet[2437]: I1105 04:49:32.491410 2437 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 5 04:49:32.492589 kubelet[2437]: E1105 04:49:32.492554 2437 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 5 04:49:32.492641 kubelet[2437]: E1105 04:49:32.492622 2437 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Nov 5 04:49:32.532944 systemd[1]: Created slice kubepods-burstable-pod5bbb9d14ed11949de6a62bbd59f3745a.slice - libcontainer container kubepods-burstable-pod5bbb9d14ed11949de6a62bbd59f3745a.slice.
Nov 5 04:49:32.561298 kubelet[2437]: E1105 04:49:32.561268 2437 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:49:32.564534 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice.
Nov 5 04:49:32.575944 kubelet[2437]: E1105 04:49:32.575900 2437 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:49:32.578575 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice.
Nov 5 04:49:32.580322 kubelet[2437]: E1105 04:49:32.580285 2437 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Nov 5 04:49:32.592354 kubelet[2437]: I1105 04:49:32.592266 2437 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 04:49:32.592687 kubelet[2437]: E1105 04:49:32.592653 2437 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Nov 5 04:49:32.599143 kubelet[2437]: E1105 04:49:32.599110 2437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms"
Nov 5 04:49:32.697800 kubelet[2437]: I1105 04:49:32.697763 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb9d14ed11949de6a62bbd59f3745a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5bbb9d14ed11949de6a62bbd59f3745a\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 04:49:32.697800 kubelet[2437]: I1105 04:49:32.697790 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb9d14ed11949de6a62bbd59f3745a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5bbb9d14ed11949de6a62bbd59f3745a\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 04:49:32.697800 kubelet[2437]: I1105 04:49:32.697806 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbb9d14ed11949de6a62bbd59f3745a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5bbb9d14ed11949de6a62bbd59f3745a\") " pod="kube-system/kube-apiserver-localhost"
Nov 5 04:49:32.697968 kubelet[2437]: I1105 04:49:32.697820 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:49:32.697968 kubelet[2437]: I1105 04:49:32.697835 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:49:32.697968 kubelet[2437]: I1105 04:49:32.697849 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Nov 5 04:49:32.697968 kubelet[2437]: I1105 04:49:32.697942 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:49:32.698057 kubelet[2437]: I1105 04:49:32.697988 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:49:32.698057 kubelet[2437]: I1105 04:49:32.698008 2437 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Nov 5 04:49:32.793870 kubelet[2437]: I1105 04:49:32.793841 2437 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 04:49:32.794275 kubelet[2437]: E1105 04:49:32.794247 2437 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Nov 5 04:49:32.982049 kubelet[2437]: E1105 04:49:32.981847 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:49:32.983239 containerd[1644]: time="2025-11-05T04:49:32.983184095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5bbb9d14ed11949de6a62bbd59f3745a,Namespace:kube-system,Attempt:0,}"
Nov 5 04:49:32.985604 kubelet[2437]: E1105 04:49:32.984940 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:49:32.985652 containerd[1644]: time="2025-11-05T04:49:32.985307728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}"
Nov 5 04:49:32.987922 kubelet[2437]: E1105 04:49:32.987881 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 04:49:32.988520 containerd[1644]: time="2025-11-05T04:49:32.988460761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}"
Nov 5 04:49:33.000563 kubelet[2437]: E1105 04:49:33.000499 2437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms"
Nov 5 04:49:33.048410 kubelet[2437]: E1105 04:49:33.048247 2437 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875030a66a9b34c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 04:49:32.39237102 +0000 UTC m=+0.728793279,LastTimestamp:2025-11-05 04:49:32.39237102 +0000 UTC m=+0.728793279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Nov 5 04:49:33.196532 kubelet[2437]: I1105 04:49:33.196453 2437 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Nov 5 04:49:33.197071 kubelet[2437]: E1105 04:49:33.196997 2437 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost"
Nov 5 04:49:33.381897 kubelet[2437]: E1105 04:49:33.381704 2437 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Nov 5 04:49:33.476647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1973222484.mount: Deactivated successfully.
Nov 5 04:49:33.483831 containerd[1644]: time="2025-11-05T04:49:33.483789263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 04:49:33.485781 containerd[1644]: time="2025-11-05T04:49:33.485714188Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 5 04:49:33.487891 containerd[1644]: time="2025-11-05T04:49:33.487829913Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 04:49:33.492610 containerd[1644]: time="2025-11-05T04:49:33.492521298Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 04:49:33.494829 containerd[1644]: time="2025-11-05T04:49:33.494788054Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 5 04:49:33.495841 containerd[1644]: time="2025-11-05T04:49:33.495805067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 04:49:33.496352 containerd[1644]: time="2025-11-05T04:49:33.496313459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 507.645298ms"
Nov 5 04:49:33.496914 containerd[1644]: time="2025-11-05T04:49:33.496874893Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 5 04:49:33.497646 containerd[1644]: time="2025-11-05T04:49:33.497448539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Nov 5 04:49:33.499533 containerd[1644]: time="2025-11-05T04:49:33.499490491Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 506.078955ms"
Nov 5 04:49:33.501848 containerd[1644]: time="2025-11-05T04:49:33.501803025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 512.239165ms"
Nov 5 04:49:33.520681 kubelet[2437]: E1105 04:49:33.520621 2437 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Nov 5 04:49:33.660155 containerd[1644]: time="2025-11-05T04:49:33.659996467Z" level=info msg="connecting to shim 794448ecf2872117c0087db82a3fcc44a8c4fdba2b3e82da09c6fa21eb6885d1" address="unix:///run/containerd/s/f03ae5d7e96d87298007ceb87192d0fab288022f7fe61fd51ee234004db6b5ca" namespace=k8s.io protocol=ttrpc version=3
Nov 5 04:49:33.756579 containerd[1644]: time="2025-11-05T04:49:33.755008410Z" level=info msg="connecting to shim 7167b700cfb2957d07e5e97261a8398b2d60cd323762172f5cd6e74cc97a5443" address="unix:///run/containerd/s/5ed9fbd7e470193ac153e80f170fc599cb3fc9d666899595aa190ce36f52d622" namespace=k8s.io protocol=ttrpc version=3
Nov 5 04:49:33.756579 containerd[1644]: time="2025-11-05T04:49:33.756530198Z" level=info msg="connecting to shim 15afbb038e51480c62c37aa5e6ea3684f035fc5f5b142706fcc8a7ad3288517c" address="unix:///run/containerd/s/37243190020416c4a7c00ba8ae2b9bd6429f746e5f4953b09fc21e379a66ff97" namespace=k8s.io protocol=ttrpc version=3
Nov 5 04:49:33.779917 systemd[1]: Started cri-containerd-794448ecf2872117c0087db82a3fcc44a8c4fdba2b3e82da09c6fa21eb6885d1.scope - libcontainer container 794448ecf2872117c0087db82a3fcc44a8c4fdba2b3e82da09c6fa21eb6885d1.
Nov 5 04:49:33.787822 systemd[1]: Started cri-containerd-15afbb038e51480c62c37aa5e6ea3684f035fc5f5b142706fcc8a7ad3288517c.scope - libcontainer container 15afbb038e51480c62c37aa5e6ea3684f035fc5f5b142706fcc8a7ad3288517c.
Nov 5 04:49:33.796913 systemd[1]: Started cri-containerd-7167b700cfb2957d07e5e97261a8398b2d60cd323762172f5cd6e74cc97a5443.scope - libcontainer container 7167b700cfb2957d07e5e97261a8398b2d60cd323762172f5cd6e74cc97a5443. Nov 5 04:49:33.801104 kubelet[2437]: E1105 04:49:33.801025 2437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="1.6s" Nov 5 04:49:33.802279 kubelet[2437]: E1105 04:49:33.802242 2437 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 04:49:33.852320 containerd[1644]: time="2025-11-05T04:49:33.852181955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"15afbb038e51480c62c37aa5e6ea3684f035fc5f5b142706fcc8a7ad3288517c\"" Nov 5 04:49:33.853701 kubelet[2437]: E1105 04:49:33.853674 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:33.873792 containerd[1644]: time="2025-11-05T04:49:33.873372049Z" level=info msg="CreateContainer within sandbox \"15afbb038e51480c62c37aa5e6ea3684f035fc5f5b142706fcc8a7ad3288517c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 04:49:33.941423 containerd[1644]: time="2025-11-05T04:49:33.940895640Z" level=info msg="Container 70fae39f1151c3ee859ea8c49b1a3eb2884f61e4d2d9476583291725ce3d9c77: CDI devices from CRI Config.CDIDevices: []" Nov 5 
04:49:33.944263 kubelet[2437]: E1105 04:49:33.944218 2437 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 04:49:33.951302 containerd[1644]: time="2025-11-05T04:49:33.951255147Z" level=info msg="CreateContainer within sandbox \"15afbb038e51480c62c37aa5e6ea3684f035fc5f5b142706fcc8a7ad3288517c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"70fae39f1151c3ee859ea8c49b1a3eb2884f61e4d2d9476583291725ce3d9c77\"" Nov 5 04:49:33.951938 containerd[1644]: time="2025-11-05T04:49:33.951914530Z" level=info msg="StartContainer for \"70fae39f1151c3ee859ea8c49b1a3eb2884f61e4d2d9476583291725ce3d9c77\"" Nov 5 04:49:33.953452 containerd[1644]: time="2025-11-05T04:49:33.953415026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5bbb9d14ed11949de6a62bbd59f3745a,Namespace:kube-system,Attempt:0,} returns sandbox id \"794448ecf2872117c0087db82a3fcc44a8c4fdba2b3e82da09c6fa21eb6885d1\"" Nov 5 04:49:33.954268 containerd[1644]: time="2025-11-05T04:49:33.954223006Z" level=info msg="connecting to shim 70fae39f1151c3ee859ea8c49b1a3eb2884f61e4d2d9476583291725ce3d9c77" address="unix:///run/containerd/s/37243190020416c4a7c00ba8ae2b9bd6429f746e5f4953b09fc21e379a66ff97" protocol=ttrpc version=3 Nov 5 04:49:33.954665 kubelet[2437]: E1105 04:49:33.954624 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:33.957516 containerd[1644]: time="2025-11-05T04:49:33.957461948Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"7167b700cfb2957d07e5e97261a8398b2d60cd323762172f5cd6e74cc97a5443\"" Nov 5 04:49:33.958130 kubelet[2437]: E1105 04:49:33.958078 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:33.959854 containerd[1644]: time="2025-11-05T04:49:33.959397174Z" level=info msg="CreateContainer within sandbox \"794448ecf2872117c0087db82a3fcc44a8c4fdba2b3e82da09c6fa21eb6885d1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 04:49:33.962820 containerd[1644]: time="2025-11-05T04:49:33.962786436Z" level=info msg="CreateContainer within sandbox \"7167b700cfb2957d07e5e97261a8398b2d60cd323762172f5cd6e74cc97a5443\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 04:49:33.969759 containerd[1644]: time="2025-11-05T04:49:33.969299048Z" level=info msg="Container b4aad2ce30c4ecf9f6e8c36833fda52dcc7dbd0b5fa102e66c001d46ee03c6cd: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:49:33.977003 systemd[1]: Started cri-containerd-70fae39f1151c3ee859ea8c49b1a3eb2884f61e4d2d9476583291725ce3d9c77.scope - libcontainer container 70fae39f1151c3ee859ea8c49b1a3eb2884f61e4d2d9476583291725ce3d9c77. 
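The containerd records above report each `pause:3.10` pull with a duration suffix (`... in 507.645298ms`). A minimal sketch of extracting those per-image durations from such lines (an assumed log-scraping helper, not part of containerd's tooling):

```python
import re

# A representative containerd log line, with the escaped quotes exactly
# as they appear in the journal output above.
LINE = (
    'time="2025-11-05T04:49:33.496313459Z" level=info '
    'msg="Pulled image \\"registry.k8s.io/pause:3.10\\" with image id '
    '\\"sha256:873ed751...\\", repo tag \\"registry.k8s.io/pause:3.10\\", '
    'size \\"320368\\" in 507.645298ms"'
)

# Match: Pulled image \"<image>\" ... in <float>ms
PULL_RE = re.compile(r'Pulled image \\"(?P<image>[^\\]+)\\".* in (?P<ms>[\d.]+)ms')

def pull_durations(lines):
    """Map image name -> pull duration in milliseconds."""
    out = {}
    for line in lines:
        m = PULL_RE.search(line)
        if m:
            out[m.group("image")] = float(m.group("ms"))
    return out

print(pull_durations([LINE]))
```

Against the three pulls recorded here (507.6 ms, 506.1 ms, 512.2 ms), this kind of scrape makes it easy to spot when sandbox image pulls start dominating pod startup.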
Nov 5 04:49:33.978991 containerd[1644]: time="2025-11-05T04:49:33.978945128Z" level=info msg="Container 5bd41420a9895dcac545960f8d231005b32e9f73d2a6b5cda207d33034632aca: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:49:33.982305 containerd[1644]: time="2025-11-05T04:49:33.982253103Z" level=info msg="CreateContainer within sandbox \"794448ecf2872117c0087db82a3fcc44a8c4fdba2b3e82da09c6fa21eb6885d1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b4aad2ce30c4ecf9f6e8c36833fda52dcc7dbd0b5fa102e66c001d46ee03c6cd\"" Nov 5 04:49:33.982857 containerd[1644]: time="2025-11-05T04:49:33.982833082Z" level=info msg="StartContainer for \"b4aad2ce30c4ecf9f6e8c36833fda52dcc7dbd0b5fa102e66c001d46ee03c6cd\"" Nov 5 04:49:33.984540 containerd[1644]: time="2025-11-05T04:49:33.984495080Z" level=info msg="connecting to shim b4aad2ce30c4ecf9f6e8c36833fda52dcc7dbd0b5fa102e66c001d46ee03c6cd" address="unix:///run/containerd/s/f03ae5d7e96d87298007ceb87192d0fab288022f7fe61fd51ee234004db6b5ca" protocol=ttrpc version=3 Nov 5 04:49:33.987459 containerd[1644]: time="2025-11-05T04:49:33.987426740Z" level=info msg="CreateContainer within sandbox \"7167b700cfb2957d07e5e97261a8398b2d60cd323762172f5cd6e74cc97a5443\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5bd41420a9895dcac545960f8d231005b32e9f73d2a6b5cda207d33034632aca\"" Nov 5 04:49:33.988391 containerd[1644]: time="2025-11-05T04:49:33.988370582Z" level=info msg="StartContainer for \"5bd41420a9895dcac545960f8d231005b32e9f73d2a6b5cda207d33034632aca\"" Nov 5 04:49:33.990211 containerd[1644]: time="2025-11-05T04:49:33.990189172Z" level=info msg="connecting to shim 5bd41420a9895dcac545960f8d231005b32e9f73d2a6b5cda207d33034632aca" address="unix:///run/containerd/s/5ed9fbd7e470193ac153e80f170fc599cb3fc9d666899595aa190ce36f52d622" protocol=ttrpc version=3 Nov 5 04:49:34.000036 kubelet[2437]: I1105 04:49:34.000005 2437 kubelet_node_status.go:75] "Attempting to register node" 
node="localhost" Nov 5 04:49:34.001173 kubelet[2437]: E1105 04:49:34.001146 2437 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Nov 5 04:49:34.021978 systemd[1]: Started cri-containerd-5bd41420a9895dcac545960f8d231005b32e9f73d2a6b5cda207d33034632aca.scope - libcontainer container 5bd41420a9895dcac545960f8d231005b32e9f73d2a6b5cda207d33034632aca. Nov 5 04:49:34.024795 systemd[1]: Started cri-containerd-b4aad2ce30c4ecf9f6e8c36833fda52dcc7dbd0b5fa102e66c001d46ee03c6cd.scope - libcontainer container b4aad2ce30c4ecf9f6e8c36833fda52dcc7dbd0b5fa102e66c001d46ee03c6cd. Nov 5 04:49:34.045514 containerd[1644]: time="2025-11-05T04:49:34.045400463Z" level=info msg="StartContainer for \"70fae39f1151c3ee859ea8c49b1a3eb2884f61e4d2d9476583291725ce3d9c77\" returns successfully" Nov 5 04:49:34.093729 containerd[1644]: time="2025-11-05T04:49:34.093658411Z" level=info msg="StartContainer for \"b4aad2ce30c4ecf9f6e8c36833fda52dcc7dbd0b5fa102e66c001d46ee03c6cd\" returns successfully" Nov 5 04:49:34.110429 containerd[1644]: time="2025-11-05T04:49:34.110364250Z" level=info msg="StartContainer for \"5bd41420a9895dcac545960f8d231005b32e9f73d2a6b5cda207d33034632aca\" returns successfully" Nov 5 04:49:34.428932 kubelet[2437]: E1105 04:49:34.428756 2437 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:49:34.428932 kubelet[2437]: E1105 04:49:34.428879 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:34.432444 kubelet[2437]: E1105 04:49:34.432415 2437 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 
04:49:34.432538 kubelet[2437]: E1105 04:49:34.432516 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:34.433812 kubelet[2437]: E1105 04:49:34.433789 2437 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:49:34.433898 kubelet[2437]: E1105 04:49:34.433877 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:35.438269 kubelet[2437]: E1105 04:49:35.438212 2437 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:49:35.438765 kubelet[2437]: E1105 04:49:35.438406 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:35.440914 kubelet[2437]: E1105 04:49:35.440882 2437 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:49:35.441037 kubelet[2437]: E1105 04:49:35.441011 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:35.605061 kubelet[2437]: I1105 04:49:35.604451 2437 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 04:49:35.864568 kubelet[2437]: E1105 04:49:35.863864 2437 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 04:49:36.039726 kubelet[2437]: I1105 04:49:36.039656 2437 
kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 04:49:36.039726 kubelet[2437]: E1105 04:49:36.039717 2437 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 5 04:49:36.049235 kubelet[2437]: E1105 04:49:36.049187 2437 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 04:49:36.149829 kubelet[2437]: E1105 04:49:36.149652 2437 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 04:49:36.250785 kubelet[2437]: E1105 04:49:36.250711 2437 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 04:49:36.332408 kubelet[2437]: E1105 04:49:36.332358 2437 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:49:36.332654 kubelet[2437]: E1105 04:49:36.332621 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:36.351850 kubelet[2437]: E1105 04:49:36.351808 2437 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 04:49:36.439941 kubelet[2437]: E1105 04:49:36.439837 2437 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 04:49:36.440282 kubelet[2437]: E1105 04:49:36.440046 2437 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:36.452516 kubelet[2437]: E1105 04:49:36.452471 2437 kubelet_node_status.go:404] "Error getting the current node from 
lister" err="node \"localhost\" not found" Nov 5 04:49:36.553308 kubelet[2437]: E1105 04:49:36.553245 2437 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 04:49:36.654126 kubelet[2437]: E1105 04:49:36.654065 2437 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 04:49:36.754308 kubelet[2437]: E1105 04:49:36.754163 2437 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 04:49:36.797307 kubelet[2437]: I1105 04:49:36.797249 2437 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 04:49:36.802806 kubelet[2437]: E1105 04:49:36.802726 2437 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 04:49:36.802806 kubelet[2437]: I1105 04:49:36.802782 2437 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 04:49:36.804378 kubelet[2437]: E1105 04:49:36.804349 2437 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 04:49:36.804378 kubelet[2437]: I1105 04:49:36.804372 2437 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 04:49:36.805635 kubelet[2437]: E1105 04:49:36.805613 2437 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 04:49:37.385063 kubelet[2437]: I1105 04:49:37.385003 2437 apiserver.go:52] "Watching apiserver" Nov 5 04:49:37.397390 
kubelet[2437]: I1105 04:49:37.397361 2437 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 04:49:38.088879 systemd[1]: Reload requested from client PID 2725 ('systemctl') (unit session-9.scope)... Nov 5 04:49:38.088900 systemd[1]: Reloading... Nov 5 04:49:38.169779 zram_generator::config[2772]: No configuration found. Nov 5 04:49:38.455246 systemd[1]: Reloading finished in 365 ms. Nov 5 04:49:38.491079 kubelet[2437]: I1105 04:49:38.491006 2437 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 04:49:38.491391 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:49:38.519604 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 04:49:38.520079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:49:38.520157 systemd[1]: kubelet.service: Consumed 969ms CPU time, 124.8M memory peak. Nov 5 04:49:38.523134 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 04:49:38.742960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 04:49:38.755062 (kubelet)[2814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 04:49:38.805213 kubelet[2814]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 04:49:38.805213 kubelet[2814]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
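The kubelet entries in this log use klog's header format (`E1105 04:49:33.000499 2437 controller.go:145]`): severity letter, month+day, wall time, PID, and source file:line. A minimal parser sketch for that header (a hypothetical helper for triaging logs like this one, not kubelet code):

```python
import re

# klog header: <sev I/W/E/F><MMDD> <HH:MM:SS.micros>  <pid> <file:line>]
KLOG_RE = re.compile(
    r'(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2}) '
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+) +'
    r'(?P<pid>\d+) (?P<loc>[^ \]]+)\]'
)

def parse_klog(line):
    """Return the klog header fields as a dict, or None if absent."""
    m = KLOG_RE.search(line)
    return m.groupdict() if m else None

hdr = parse_klog(
    'kubelet[2437]: E1105 04:49:33.000499    2437 controller.go:145] '
    '"Failed to ensure lease exists, will retry"'
)
print(hdr)
```

Grouping by the `loc` field quickly shows which call sites dominate a boot like this one (`dns.go:154`, `kubelet_node_status.go:404`, `reflector.go:205`).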
Nov 5 04:49:38.805620 kubelet[2814]: I1105 04:49:38.805240 2814 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 04:49:38.813566 kubelet[2814]: I1105 04:49:38.813521 2814 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 04:49:38.813566 kubelet[2814]: I1105 04:49:38.813552 2814 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 04:49:38.813678 kubelet[2814]: I1105 04:49:38.813584 2814 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 04:49:38.813678 kubelet[2814]: I1105 04:49:38.813591 2814 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 04:49:38.813863 kubelet[2814]: I1105 04:49:38.813835 2814 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 04:49:38.815058 kubelet[2814]: I1105 04:49:38.815030 2814 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 04:49:38.816982 kubelet[2814]: I1105 04:49:38.816949 2814 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 04:49:38.820180 kubelet[2814]: I1105 04:49:38.820134 2814 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 04:49:38.828439 kubelet[2814]: I1105 04:49:38.828415 2814 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 5 04:49:38.828681 kubelet[2814]: I1105 04:49:38.828653 2814 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 04:49:38.828834 kubelet[2814]: I1105 04:49:38.828679 2814 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 04:49:38.828931 kubelet[2814]: I1105 04:49:38.828840 2814 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 04:49:38.828931 
kubelet[2814]: I1105 04:49:38.828848 2814 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 04:49:38.828931 kubelet[2814]: I1105 04:49:38.828872 2814 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 04:49:38.829535 kubelet[2814]: I1105 04:49:38.829511 2814 state_mem.go:36] "Initialized new in-memory state store" Nov 5 04:49:38.829758 kubelet[2814]: I1105 04:49:38.829672 2814 kubelet.go:475] "Attempting to sync node with API server" Nov 5 04:49:38.829758 kubelet[2814]: I1105 04:49:38.829690 2814 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 04:49:38.829758 kubelet[2814]: I1105 04:49:38.829720 2814 kubelet.go:387] "Adding apiserver pod source" Nov 5 04:49:38.829841 kubelet[2814]: I1105 04:49:38.829768 2814 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 04:49:38.833025 kubelet[2814]: I1105 04:49:38.833009 2814 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.1.4" apiVersion="v1" Nov 5 04:49:38.833774 kubelet[2814]: I1105 04:49:38.833758 2814 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 04:49:38.833895 kubelet[2814]: I1105 04:49:38.833867 2814 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 04:49:38.836609 kubelet[2814]: I1105 04:49:38.836591 2814 server.go:1262] "Started kubelet" Nov 5 04:49:38.838255 kubelet[2814]: I1105 04:49:38.837821 2814 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 04:49:38.838255 kubelet[2814]: I1105 04:49:38.837908 2814 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 04:49:38.838255 kubelet[2814]: I1105 04:49:38.838090 2814 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Nov 5 04:49:38.838255 kubelet[2814]: I1105 04:49:38.838156 2814 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 04:49:38.838255 kubelet[2814]: I1105 04:49:38.838209 2814 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 04:49:38.839301 kubelet[2814]: I1105 04:49:38.839278 2814 server.go:310] "Adding debug handlers to kubelet server" Nov 5 04:49:38.841854 kubelet[2814]: I1105 04:49:38.841797 2814 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 04:49:38.845027 kubelet[2814]: I1105 04:49:38.844995 2814 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 04:49:38.845162 kubelet[2814]: I1105 04:49:38.845150 2814 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 04:49:38.846682 kubelet[2814]: I1105 04:49:38.846652 2814 reconciler.go:29] "Reconciler: start to sync state" Nov 5 04:49:38.846934 kubelet[2814]: E1105 04:49:38.846875 2814 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 04:49:38.850204 kubelet[2814]: I1105 04:49:38.850149 2814 factory.go:223] Registration of the systemd container factory successfully Nov 5 04:49:38.850342 kubelet[2814]: I1105 04:49:38.850305 2814 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 04:49:38.852248 kubelet[2814]: I1105 04:49:38.852199 2814 factory.go:223] Registration of the containerd container factory successfully Nov 5 04:49:38.856515 kubelet[2814]: I1105 04:49:38.856438 2814 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 5 04:49:38.857766 kubelet[2814]: I1105 04:49:38.857711 2814 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 5 04:49:38.857766 kubelet[2814]: I1105 04:49:38.857752 2814 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 04:49:38.857962 kubelet[2814]: I1105 04:49:38.857781 2814 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 04:49:38.857962 kubelet[2814]: E1105 04:49:38.857828 2814 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 04:49:38.888005 kubelet[2814]: I1105 04:49:38.887970 2814 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 04:49:38.888005 kubelet[2814]: I1105 04:49:38.887989 2814 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 04:49:38.888005 kubelet[2814]: I1105 04:49:38.888016 2814 state_mem.go:36] "Initialized new in-memory state store" Nov 5 04:49:38.888247 kubelet[2814]: I1105 04:49:38.888149 2814 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 04:49:38.888247 kubelet[2814]: I1105 04:49:38.888159 2814 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 04:49:38.888247 kubelet[2814]: I1105 04:49:38.888186 2814 policy_none.go:49] "None policy: Start" Nov 5 04:49:38.888247 kubelet[2814]: I1105 04:49:38.888197 2814 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 04:49:38.888247 kubelet[2814]: I1105 04:49:38.888208 2814 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 04:49:38.888390 kubelet[2814]: I1105 04:49:38.888351 2814 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 5 04:49:38.888390 kubelet[2814]: I1105 04:49:38.888369 2814 policy_none.go:47] "Start" Nov 5 04:49:38.892729 kubelet[2814]: E1105 04:49:38.892702 2814 manager.go:513] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 04:49:38.892945 kubelet[2814]: I1105 04:49:38.892925 2814 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 04:49:38.892994 kubelet[2814]: I1105 04:49:38.892942 2814 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 04:49:38.893273 kubelet[2814]: I1105 04:49:38.893247 2814 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 04:49:38.893945 kubelet[2814]: E1105 04:49:38.893929 2814 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 04:49:38.959092 kubelet[2814]: I1105 04:49:38.959041 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 04:49:38.959092 kubelet[2814]: I1105 04:49:38.959098 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 04:49:38.959347 kubelet[2814]: I1105 04:49:38.959136 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 04:49:38.998641 kubelet[2814]: I1105 04:49:38.998511 2814 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 04:49:39.005840 kubelet[2814]: I1105 04:49:39.005769 2814 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 5 04:49:39.006029 kubelet[2814]: I1105 04:49:39.005867 2814 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 04:49:39.048237 kubelet[2814]: I1105 04:49:39.048181 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb9d14ed11949de6a62bbd59f3745a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5bbb9d14ed11949de6a62bbd59f3745a\") " 
pod="kube-system/kube-apiserver-localhost" Nov 5 04:49:39.048237 kubelet[2814]: I1105 04:49:39.048236 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5bbb9d14ed11949de6a62bbd59f3745a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5bbb9d14ed11949de6a62bbd59f3745a\") " pod="kube-system/kube-apiserver-localhost" Nov 5 04:49:39.048515 kubelet[2814]: I1105 04:49:39.048293 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:49:39.048515 kubelet[2814]: I1105 04:49:39.048347 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:49:39.048515 kubelet[2814]: I1105 04:49:39.048429 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 5 04:49:39.048515 kubelet[2814]: I1105 04:49:39.048473 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 5 04:49:39.048705 kubelet[2814]: I1105 04:49:39.048528 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:49:39.048705 kubelet[2814]: I1105 04:49:39.048555 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 04:49:39.048705 kubelet[2814]: I1105 04:49:39.048595 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5bbb9d14ed11949de6a62bbd59f3745a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5bbb9d14ed11949de6a62bbd59f3745a\") " pod="kube-system/kube-apiserver-localhost" Nov 5 04:49:39.266222 kubelet[2814]: E1105 04:49:39.266018 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:39.266342 kubelet[2814]: E1105 04:49:39.266219 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:39.266658 kubelet[2814]: E1105 04:49:39.266623 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:39.831271 
kubelet[2814]: I1105 04:49:39.831136 2814 apiserver.go:52] "Watching apiserver" Nov 5 04:49:39.846115 kubelet[2814]: I1105 04:49:39.846061 2814 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 04:49:39.875409 kubelet[2814]: I1105 04:49:39.874250 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 04:49:39.875409 kubelet[2814]: I1105 04:49:39.874366 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 04:49:39.875409 kubelet[2814]: I1105 04:49:39.874613 2814 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 04:49:39.884220 kubelet[2814]: E1105 04:49:39.884118 2814 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 5 04:49:39.884759 kubelet[2814]: E1105 04:49:39.884684 2814 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 5 04:49:39.885700 kubelet[2814]: E1105 04:49:39.884953 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:39.885700 kubelet[2814]: E1105 04:49:39.885529 2814 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 5 04:49:39.885700 kubelet[2814]: E1105 04:49:39.885646 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:39.885857 kubelet[2814]: E1105 04:49:39.885781 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:39.912762 kubelet[2814]: I1105 04:49:39.912671 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.912652858 podStartE2EDuration="1.912652858s" podCreationTimestamp="2025-11-05 04:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:49:39.903552823 +0000 UTC m=+1.143861586" watchObservedRunningTime="2025-11-05 04:49:39.912652858 +0000 UTC m=+1.152961621" Nov 5 04:49:39.924333 kubelet[2814]: I1105 04:49:39.924259 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.924240873 podStartE2EDuration="1.924240873s" podCreationTimestamp="2025-11-05 04:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:49:39.924240593 +0000 UTC m=+1.164549356" watchObservedRunningTime="2025-11-05 04:49:39.924240873 +0000 UTC m=+1.164549636" Nov 5 04:49:39.924722 kubelet[2814]: I1105 04:49:39.924375 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.924366984 podStartE2EDuration="1.924366984s" podCreationTimestamp="2025-11-05 04:49:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:49:39.913002177 +0000 UTC m=+1.153310940" watchObservedRunningTime="2025-11-05 04:49:39.924366984 +0000 UTC m=+1.164675747" Nov 5 04:49:40.876276 kubelet[2814]: E1105 04:49:40.876222 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Nov 5 04:49:40.876795 kubelet[2814]: E1105 04:49:40.876330 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:40.876795 kubelet[2814]: E1105 04:49:40.876550 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:41.878184 kubelet[2814]: E1105 04:49:41.878144 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:45.236946 update_engine[1630]: I20251105 04:49:45.236825 1630 update_attempter.cc:509] Updating boot flags... Nov 5 04:49:45.372553 kubelet[2814]: I1105 04:49:45.372379 2814 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 04:49:45.375315 containerd[1644]: time="2025-11-05T04:49:45.375262762Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 04:49:45.375726 kubelet[2814]: I1105 04:49:45.375588 2814 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 04:49:45.834587 systemd[1]: Created slice kubepods-besteffort-podc2f85557_9747_4320_936c_fdcdda7ff229.slice - libcontainer container kubepods-besteffort-podc2f85557_9747_4320_936c_fdcdda7ff229.slice. 
Nov 5 04:49:45.890315 kubelet[2814]: I1105 04:49:45.890226 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2f85557-9747-4320-936c-fdcdda7ff229-lib-modules\") pod \"kube-proxy-jlkt5\" (UID: \"c2f85557-9747-4320-936c-fdcdda7ff229\") " pod="kube-system/kube-proxy-jlkt5" Nov 5 04:49:45.890315 kubelet[2814]: I1105 04:49:45.890286 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c2f85557-9747-4320-936c-fdcdda7ff229-kube-proxy\") pod \"kube-proxy-jlkt5\" (UID: \"c2f85557-9747-4320-936c-fdcdda7ff229\") " pod="kube-system/kube-proxy-jlkt5" Nov 5 04:49:45.890315 kubelet[2814]: I1105 04:49:45.890311 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2f85557-9747-4320-936c-fdcdda7ff229-xtables-lock\") pod \"kube-proxy-jlkt5\" (UID: \"c2f85557-9747-4320-936c-fdcdda7ff229\") " pod="kube-system/kube-proxy-jlkt5" Nov 5 04:49:45.890315 kubelet[2814]: I1105 04:49:45.890328 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rnfj\" (UniqueName: \"kubernetes.io/projected/c2f85557-9747-4320-936c-fdcdda7ff229-kube-api-access-2rnfj\") pod \"kube-proxy-jlkt5\" (UID: \"c2f85557-9747-4320-936c-fdcdda7ff229\") " pod="kube-system/kube-proxy-jlkt5" Nov 5 04:49:45.995948 kubelet[2814]: E1105 04:49:45.995904 2814 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 5 04:49:45.995948 kubelet[2814]: E1105 04:49:45.995934 2814 projected.go:196] Error preparing data for projected volume kube-api-access-2rnfj for pod kube-system/kube-proxy-jlkt5: configmap "kube-root-ca.crt" not found Nov 5 04:49:45.996133 kubelet[2814]: E1105 04:49:45.996007 2814 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c2f85557-9747-4320-936c-fdcdda7ff229-kube-api-access-2rnfj podName:c2f85557-9747-4320-936c-fdcdda7ff229 nodeName:}" failed. No retries permitted until 2025-11-05 04:49:46.495980082 +0000 UTC m=+7.736288845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2rnfj" (UniqueName: "kubernetes.io/projected/c2f85557-9747-4320-936c-fdcdda7ff229-kube-api-access-2rnfj") pod "kube-proxy-jlkt5" (UID: "c2f85557-9747-4320-936c-fdcdda7ff229") : configmap "kube-root-ca.crt" not found Nov 5 04:49:46.433867 systemd[1]: Created slice kubepods-besteffort-pod3a35b6f0_9a65_43a4_aeef_da87fa982f32.slice - libcontainer container kubepods-besteffort-pod3a35b6f0_9a65_43a4_aeef_da87fa982f32.slice. Nov 5 04:49:46.495029 kubelet[2814]: I1105 04:49:46.494983 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-872pf\" (UniqueName: \"kubernetes.io/projected/3a35b6f0-9a65-43a4-aeef-da87fa982f32-kube-api-access-872pf\") pod \"tigera-operator-65cdcdfd6d-87rlv\" (UID: \"3a35b6f0-9a65-43a4-aeef-da87fa982f32\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-87rlv" Nov 5 04:49:46.495029 kubelet[2814]: I1105 04:49:46.495021 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a35b6f0-9a65-43a4-aeef-da87fa982f32-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-87rlv\" (UID: \"3a35b6f0-9a65-43a4-aeef-da87fa982f32\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-87rlv" Nov 5 04:49:46.741111 containerd[1644]: time="2025-11-05T04:49:46.741005640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-87rlv,Uid:3a35b6f0-9a65-43a4-aeef-da87fa982f32,Namespace:tigera-operator,Attempt:0,}" Nov 5 04:49:46.746834 kubelet[2814]: E1105 04:49:46.746792 2814 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:46.747341 containerd[1644]: time="2025-11-05T04:49:46.747276590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jlkt5,Uid:c2f85557-9747-4320-936c-fdcdda7ff229,Namespace:kube-system,Attempt:0,}" Nov 5 04:49:46.773424 containerd[1644]: time="2025-11-05T04:49:46.773211333Z" level=info msg="connecting to shim 97e9b20ce6f52e1ef2379336cd3d65120c044addbf85bdbd0d3d87016b048702" address="unix:///run/containerd/s/b76cb4a6dd75e495028c793641ee12bd424d7f07a254bc770f4383a22688f008" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:49:46.787053 containerd[1644]: time="2025-11-05T04:49:46.787010779Z" level=info msg="connecting to shim 9fe6b52b4a5ebe21c10ef55d7b2d9afc74b672c7b50f746af862846a801c82be" address="unix:///run/containerd/s/d33d1d3d214139368275ced9cf45bb9bb5a6bf7308282152846e40f538576538" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:49:46.846880 systemd[1]: Started cri-containerd-97e9b20ce6f52e1ef2379336cd3d65120c044addbf85bdbd0d3d87016b048702.scope - libcontainer container 97e9b20ce6f52e1ef2379336cd3d65120c044addbf85bdbd0d3d87016b048702. Nov 5 04:49:46.850895 systemd[1]: Started cri-containerd-9fe6b52b4a5ebe21c10ef55d7b2d9afc74b672c7b50f746af862846a801c82be.scope - libcontainer container 9fe6b52b4a5ebe21c10ef55d7b2d9afc74b672c7b50f746af862846a801c82be. 
Nov 5 04:49:46.881295 containerd[1644]: time="2025-11-05T04:49:46.881159518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jlkt5,Uid:c2f85557-9747-4320-936c-fdcdda7ff229,Namespace:kube-system,Attempt:0,} returns sandbox id \"9fe6b52b4a5ebe21c10ef55d7b2d9afc74b672c7b50f746af862846a801c82be\"" Nov 5 04:49:46.882355 kubelet[2814]: E1105 04:49:46.882317 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:46.894407 containerd[1644]: time="2025-11-05T04:49:46.894363945Z" level=info msg="CreateContainer within sandbox \"9fe6b52b4a5ebe21c10ef55d7b2d9afc74b672c7b50f746af862846a801c82be\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 04:49:46.905197 containerd[1644]: time="2025-11-05T04:49:46.905149948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-87rlv,Uid:3a35b6f0-9a65-43a4-aeef-da87fa982f32,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"97e9b20ce6f52e1ef2379336cd3d65120c044addbf85bdbd0d3d87016b048702\"" Nov 5 04:49:46.908330 containerd[1644]: time="2025-11-05T04:49:46.908281085Z" level=info msg="Container 3774bea6caa00dc8d251f85207b68c92685e3b5f2fae61c22083846a4dec4995: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:49:46.912329 containerd[1644]: time="2025-11-05T04:49:46.912272986Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 04:49:46.917486 containerd[1644]: time="2025-11-05T04:49:46.917449617Z" level=info msg="CreateContainer within sandbox \"9fe6b52b4a5ebe21c10ef55d7b2d9afc74b672c7b50f746af862846a801c82be\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3774bea6caa00dc8d251f85207b68c92685e3b5f2fae61c22083846a4dec4995\"" Nov 5 04:49:46.917952 containerd[1644]: time="2025-11-05T04:49:46.917918287Z" level=info msg="StartContainer for 
\"3774bea6caa00dc8d251f85207b68c92685e3b5f2fae61c22083846a4dec4995\"" Nov 5 04:49:46.919783 containerd[1644]: time="2025-11-05T04:49:46.919632803Z" level=info msg="connecting to shim 3774bea6caa00dc8d251f85207b68c92685e3b5f2fae61c22083846a4dec4995" address="unix:///run/containerd/s/d33d1d3d214139368275ced9cf45bb9bb5a6bf7308282152846e40f538576538" protocol=ttrpc version=3 Nov 5 04:49:46.949897 systemd[1]: Started cri-containerd-3774bea6caa00dc8d251f85207b68c92685e3b5f2fae61c22083846a4dec4995.scope - libcontainer container 3774bea6caa00dc8d251f85207b68c92685e3b5f2fae61c22083846a4dec4995. Nov 5 04:49:46.994097 containerd[1644]: time="2025-11-05T04:49:46.994004532Z" level=info msg="StartContainer for \"3774bea6caa00dc8d251f85207b68c92685e3b5f2fae61c22083846a4dec4995\" returns successfully" Nov 5 04:49:47.773336 kubelet[2814]: E1105 04:49:47.773249 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:47.893122 kubelet[2814]: E1105 04:49:47.893080 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:47.894758 kubelet[2814]: E1105 04:49:47.894694 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:47.910520 kubelet[2814]: I1105 04:49:47.910284 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jlkt5" podStartSLOduration=2.910268415 podStartE2EDuration="2.910268415s" podCreationTimestamp="2025-11-05 04:49:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:49:47.901301409 +0000 UTC m=+9.141610172" 
watchObservedRunningTime="2025-11-05 04:49:47.910268415 +0000 UTC m=+9.150577178" Nov 5 04:49:48.427971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4097310190.mount: Deactivated successfully. Nov 5 04:49:48.902401 kubelet[2814]: E1105 04:49:48.902367 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:48.903858 kubelet[2814]: E1105 04:49:48.903819 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:48.918243 containerd[1644]: time="2025-11-05T04:49:48.918201654Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:48.919002 containerd[1644]: time="2025-11-05T04:49:48.918954180Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=23558205" Nov 5 04:49:48.920101 containerd[1644]: time="2025-11-05T04:49:48.920049898Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:48.926073 containerd[1644]: time="2025-11-05T04:49:48.926032433Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:49:48.926995 containerd[1644]: time="2025-11-05T04:49:48.926712021Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest 
\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.014402396s" Nov 5 04:49:48.926995 containerd[1644]: time="2025-11-05T04:49:48.926758219Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 04:49:48.931715 containerd[1644]: time="2025-11-05T04:49:48.931672687Z" level=info msg="CreateContainer within sandbox \"97e9b20ce6f52e1ef2379336cd3d65120c044addbf85bdbd0d3d87016b048702\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 04:49:48.940116 containerd[1644]: time="2025-11-05T04:49:48.940063559Z" level=info msg="Container 8f3736246b94d265f7f69f6e39c17425443ebf887bb6d7a26787b122afd4694c: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:49:48.947435 containerd[1644]: time="2025-11-05T04:49:48.947396964Z" level=info msg="CreateContainer within sandbox \"97e9b20ce6f52e1ef2379336cd3d65120c044addbf85bdbd0d3d87016b048702\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8f3736246b94d265f7f69f6e39c17425443ebf887bb6d7a26787b122afd4694c\"" Nov 5 04:49:48.947868 containerd[1644]: time="2025-11-05T04:49:48.947843401Z" level=info msg="StartContainer for \"8f3736246b94d265f7f69f6e39c17425443ebf887bb6d7a26787b122afd4694c\"" Nov 5 04:49:48.948761 containerd[1644]: time="2025-11-05T04:49:48.948712239Z" level=info msg="connecting to shim 8f3736246b94d265f7f69f6e39c17425443ebf887bb6d7a26787b122afd4694c" address="unix:///run/containerd/s/b76cb4a6dd75e495028c793641ee12bd424d7f07a254bc770f4383a22688f008" protocol=ttrpc version=3 Nov 5 04:49:48.977889 systemd[1]: Started cri-containerd-8f3736246b94d265f7f69f6e39c17425443ebf887bb6d7a26787b122afd4694c.scope - libcontainer container 8f3736246b94d265f7f69f6e39c17425443ebf887bb6d7a26787b122afd4694c. 
Nov 5 04:49:49.008772 containerd[1644]: time="2025-11-05T04:49:49.008697825Z" level=info msg="StartContainer for \"8f3736246b94d265f7f69f6e39c17425443ebf887bb6d7a26787b122afd4694c\" returns successfully" Nov 5 04:49:49.657359 kubelet[2814]: E1105 04:49:49.657307 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:51.164053 kubelet[2814]: E1105 04:49:51.163984 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:51.177998 kubelet[2814]: I1105 04:49:51.177881 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-87rlv" podStartSLOduration=3.156380358 podStartE2EDuration="5.177861624s" podCreationTimestamp="2025-11-05 04:49:46 +0000 UTC" firstStartedPulling="2025-11-05 04:49:46.906327534 +0000 UTC m=+8.146636298" lastFinishedPulling="2025-11-05 04:49:48.927808801 +0000 UTC m=+10.168117564" observedRunningTime="2025-11-05 04:49:49.915592121 +0000 UTC m=+11.155900884" watchObservedRunningTime="2025-11-05 04:49:51.177861624 +0000 UTC m=+12.418170387" Nov 5 04:49:54.306363 sudo[1887]: pam_unix(sudo:session): session closed for user root Nov 5 04:49:54.309356 sshd[1886]: Connection closed by 10.0.0.1 port 55890 Nov 5 04:49:54.311026 sshd-session[1882]: pam_unix(sshd:session): session closed for user core Nov 5 04:49:54.323433 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:55890.service: Deactivated successfully. Nov 5 04:49:54.331135 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 04:49:54.334842 systemd[1]: session-9.scope: Consumed 5.871s CPU time, 222.4M memory peak. Nov 5 04:49:54.345056 systemd-logind[1625]: Session 9 logged out. Waiting for processes to exit. Nov 5 04:49:54.347236 systemd-logind[1625]: Removed session 9. 
Nov 5 04:49:58.923970 systemd[1]: Created slice kubepods-besteffort-pod627cec5f_54a1_41a1_95b8_a02d46769662.slice - libcontainer container kubepods-besteffort-pod627cec5f_54a1_41a1_95b8_a02d46769662.slice. Nov 5 04:49:58.975481 kubelet[2814]: I1105 04:49:58.975165 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/627cec5f-54a1-41a1-95b8-a02d46769662-typha-certs\") pod \"calico-typha-77d8656dc7-rsl2j\" (UID: \"627cec5f-54a1-41a1-95b8-a02d46769662\") " pod="calico-system/calico-typha-77d8656dc7-rsl2j" Nov 5 04:49:58.975481 kubelet[2814]: I1105 04:49:58.975225 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/627cec5f-54a1-41a1-95b8-a02d46769662-tigera-ca-bundle\") pod \"calico-typha-77d8656dc7-rsl2j\" (UID: \"627cec5f-54a1-41a1-95b8-a02d46769662\") " pod="calico-system/calico-typha-77d8656dc7-rsl2j" Nov 5 04:49:58.975481 kubelet[2814]: I1105 04:49:58.975247 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b74wz\" (UniqueName: \"kubernetes.io/projected/627cec5f-54a1-41a1-95b8-a02d46769662-kube-api-access-b74wz\") pod \"calico-typha-77d8656dc7-rsl2j\" (UID: \"627cec5f-54a1-41a1-95b8-a02d46769662\") " pod="calico-system/calico-typha-77d8656dc7-rsl2j" Nov 5 04:49:58.986213 systemd[1]: Created slice kubepods-besteffort-pod5e1d9d01_5a2e_4f05_99fb_157453da453c.slice - libcontainer container kubepods-besteffort-pod5e1d9d01_5a2e_4f05_99fb_157453da453c.slice. 
Nov 5 04:49:59.075795 kubelet[2814]: I1105 04:49:59.075702 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5e1d9d01-5a2e-4f05-99fb-157453da453c-var-lib-calico\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.075795 kubelet[2814]: I1105 04:49:59.075776 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq69l\" (UniqueName: \"kubernetes.io/projected/5e1d9d01-5a2e-4f05-99fb-157453da453c-kube-api-access-tq69l\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.076057 kubelet[2814]: I1105 04:49:59.075825 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5e1d9d01-5a2e-4f05-99fb-157453da453c-cni-net-dir\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.076057 kubelet[2814]: I1105 04:49:59.075855 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5e1d9d01-5a2e-4f05-99fb-157453da453c-node-certs\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.076057 kubelet[2814]: I1105 04:49:59.075896 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5e1d9d01-5a2e-4f05-99fb-157453da453c-flexvol-driver-host\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.076057 kubelet[2814]: I1105 04:49:59.075934 
2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5e1d9d01-5a2e-4f05-99fb-157453da453c-var-run-calico\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.076245 kubelet[2814]: I1105 04:49:59.076144 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5e1d9d01-5a2e-4f05-99fb-157453da453c-policysync\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.076335 kubelet[2814]: I1105 04:49:59.076287 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5e1d9d01-5a2e-4f05-99fb-157453da453c-cni-bin-dir\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.076335 kubelet[2814]: I1105 04:49:59.076307 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5e1d9d01-5a2e-4f05-99fb-157453da453c-cni-log-dir\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.076335 kubelet[2814]: I1105 04:49:59.076322 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e1d9d01-5a2e-4f05-99fb-157453da453c-lib-modules\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.076486 kubelet[2814]: I1105 04:49:59.076343 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5e1d9d01-5a2e-4f05-99fb-157453da453c-tigera-ca-bundle\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.076486 kubelet[2814]: I1105 04:49:59.076362 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e1d9d01-5a2e-4f05-99fb-157453da453c-xtables-lock\") pod \"calico-node-d7mkr\" (UID: \"5e1d9d01-5a2e-4f05-99fb-157453da453c\") " pod="calico-system/calico-node-d7mkr" Nov 5 04:49:59.180864 kubelet[2814]: E1105 04:49:59.180230 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:49:59.180864 kubelet[2814]: E1105 04:49:59.180652 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.180864 kubelet[2814]: W1105 04:49:59.180684 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.180864 kubelet[2814]: E1105 04:49:59.180718 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.183946 kubelet[2814]: E1105 04:49:59.183891 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.183946 kubelet[2814]: W1105 04:49:59.183928 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.184038 kubelet[2814]: E1105 04:49:59.183953 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.189891 kubelet[2814]: E1105 04:49:59.189852 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.195062 kubelet[2814]: W1105 04:49:59.189878 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.195062 kubelet[2814]: E1105 04:49:59.194917 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.196506 kubelet[2814]: E1105 04:49:59.196480 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.196506 kubelet[2814]: W1105 04:49:59.196499 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.196604 kubelet[2814]: E1105 04:49:59.196515 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.235335 kubelet[2814]: E1105 04:49:59.235272 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:59.236045 containerd[1644]: time="2025-11-05T04:49:59.235971062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77d8656dc7-rsl2j,Uid:627cec5f-54a1-41a1-95b8-a02d46769662,Namespace:calico-system,Attempt:0,}" Nov 5 04:49:59.271041 containerd[1644]: time="2025-11-05T04:49:59.270965889Z" level=info msg="connecting to shim 3f37100aa3d959284c2d45c8007f765de31f25a27514fcf0529d40edf506f521" address="unix:///run/containerd/s/750c1eeed46dee698651d09c1198c2754b455bd89b9564892907b396b1396926" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:49:59.274418 kubelet[2814]: E1105 04:49:59.274364 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.274418 kubelet[2814]: W1105 04:49:59.274394 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.274515 kubelet[2814]: 
E1105 04:49:59.274418 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.274755 kubelet[2814]: E1105 04:49:59.274719 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.274896 kubelet[2814]: W1105 04:49:59.274789 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.274896 kubelet[2814]: E1105 04:49:59.274804 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.275305 kubelet[2814]: E1105 04:49:59.275076 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.275305 kubelet[2814]: W1105 04:49:59.275086 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.275305 kubelet[2814]: E1105 04:49:59.275097 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.275419 kubelet[2814]: E1105 04:49:59.275397 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.275419 kubelet[2814]: W1105 04:49:59.275413 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.275571 kubelet[2814]: E1105 04:49:59.275446 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.275692 kubelet[2814]: E1105 04:49:59.275671 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.275692 kubelet[2814]: W1105 04:49:59.275683 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.275692 kubelet[2814]: E1105 04:49:59.275692 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.276066 kubelet[2814]: E1105 04:49:59.276039 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.276138 kubelet[2814]: W1105 04:49:59.276071 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.276138 kubelet[2814]: E1105 04:49:59.276083 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.276448 kubelet[2814]: E1105 04:49:59.276429 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.276448 kubelet[2814]: W1105 04:49:59.276443 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.276510 kubelet[2814]: E1105 04:49:59.276453 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.276720 kubelet[2814]: E1105 04:49:59.276686 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.276720 kubelet[2814]: W1105 04:49:59.276714 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.276827 kubelet[2814]: E1105 04:49:59.276754 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.277078 kubelet[2814]: E1105 04:49:59.277061 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.277078 kubelet[2814]: W1105 04:49:59.277073 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.277169 kubelet[2814]: E1105 04:49:59.277083 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.277313 kubelet[2814]: E1105 04:49:59.277279 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.277313 kubelet[2814]: W1105 04:49:59.277290 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.277313 kubelet[2814]: E1105 04:49:59.277313 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.277527 kubelet[2814]: E1105 04:49:59.277513 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.277527 kubelet[2814]: W1105 04:49:59.277523 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.277620 kubelet[2814]: E1105 04:49:59.277531 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.278259 kubelet[2814]: E1105 04:49:59.278239 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.278259 kubelet[2814]: W1105 04:49:59.278251 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.278361 kubelet[2814]: E1105 04:49:59.278269 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.278909 kubelet[2814]: E1105 04:49:59.278893 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.278909 kubelet[2814]: W1105 04:49:59.278905 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.278965 kubelet[2814]: E1105 04:49:59.278915 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.279238 kubelet[2814]: E1105 04:49:59.279127 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.279238 kubelet[2814]: W1105 04:49:59.279139 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.279238 kubelet[2814]: E1105 04:49:59.279151 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.279418 kubelet[2814]: E1105 04:49:59.279405 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.279490 kubelet[2814]: W1105 04:49:59.279469 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.279556 kubelet[2814]: E1105 04:49:59.279544 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.279819 kubelet[2814]: E1105 04:49:59.279801 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.279945 kubelet[2814]: W1105 04:49:59.279905 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.279945 kubelet[2814]: E1105 04:49:59.279929 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.280754 kubelet[2814]: E1105 04:49:59.280720 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.280867 kubelet[2814]: W1105 04:49:59.280814 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.280867 kubelet[2814]: E1105 04:49:59.280829 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.281232 kubelet[2814]: E1105 04:49:59.281166 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.281232 kubelet[2814]: W1105 04:49:59.281180 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.281232 kubelet[2814]: E1105 04:49:59.281190 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.281561 kubelet[2814]: E1105 04:49:59.281547 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.281754 kubelet[2814]: W1105 04:49:59.281619 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.281754 kubelet[2814]: E1105 04:49:59.281632 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.281910 kubelet[2814]: E1105 04:49:59.281899 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.281971 kubelet[2814]: W1105 04:49:59.281960 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.282041 kubelet[2814]: E1105 04:49:59.282019 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.282473 kubelet[2814]: E1105 04:49:59.282433 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.282473 kubelet[2814]: W1105 04:49:59.282446 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.282473 kubelet[2814]: E1105 04:49:59.282455 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.282636 kubelet[2814]: I1105 04:49:59.282614 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj5vq\" (UniqueName: \"kubernetes.io/projected/f4956568-400e-4c71-8a7d-11217f3b2032-kube-api-access-mj5vq\") pod \"csi-node-driver-zml8c\" (UID: \"f4956568-400e-4c71-8a7d-11217f3b2032\") " pod="calico-system/csi-node-driver-zml8c" Nov 5 04:49:59.283035 kubelet[2814]: E1105 04:49:59.282989 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.283035 kubelet[2814]: W1105 04:49:59.283004 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.283035 kubelet[2814]: E1105 04:49:59.283020 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.283380 kubelet[2814]: E1105 04:49:59.283362 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.283380 kubelet[2814]: W1105 04:49:59.283376 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.283451 kubelet[2814]: E1105 04:49:59.283386 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.283667 kubelet[2814]: E1105 04:49:59.283650 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.283667 kubelet[2814]: W1105 04:49:59.283662 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.283731 kubelet[2814]: E1105 04:49:59.283671 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.283731 kubelet[2814]: I1105 04:49:59.283705 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f4956568-400e-4c71-8a7d-11217f3b2032-kubelet-dir\") pod \"csi-node-driver-zml8c\" (UID: \"f4956568-400e-4c71-8a7d-11217f3b2032\") " pod="calico-system/csi-node-driver-zml8c" Nov 5 04:49:59.284039 kubelet[2814]: E1105 04:49:59.284004 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.284039 kubelet[2814]: W1105 04:49:59.284023 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.284212 kubelet[2814]: E1105 04:49:59.284050 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.284212 kubelet[2814]: I1105 04:49:59.284087 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f4956568-400e-4c71-8a7d-11217f3b2032-registration-dir\") pod \"csi-node-driver-zml8c\" (UID: \"f4956568-400e-4c71-8a7d-11217f3b2032\") " pod="calico-system/csi-node-driver-zml8c" Nov 5 04:49:59.284781 kubelet[2814]: E1105 04:49:59.284337 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.284781 kubelet[2814]: W1105 04:49:59.284358 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.284781 kubelet[2814]: E1105 04:49:59.284371 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.284781 kubelet[2814]: I1105 04:49:59.284404 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f4956568-400e-4c71-8a7d-11217f3b2032-varrun\") pod \"csi-node-driver-zml8c\" (UID: \"f4956568-400e-4c71-8a7d-11217f3b2032\") " pod="calico-system/csi-node-driver-zml8c" Nov 5 04:49:59.284981 kubelet[2814]: E1105 04:49:59.284922 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.284981 kubelet[2814]: W1105 04:49:59.284971 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.285071 kubelet[2814]: E1105 04:49:59.284988 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.285071 kubelet[2814]: I1105 04:49:59.285033 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f4956568-400e-4c71-8a7d-11217f3b2032-socket-dir\") pod \"csi-node-driver-zml8c\" (UID: \"f4956568-400e-4c71-8a7d-11217f3b2032\") " pod="calico-system/csi-node-driver-zml8c" Nov 5 04:49:59.285555 kubelet[2814]: E1105 04:49:59.285363 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.285555 kubelet[2814]: W1105 04:49:59.285378 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.285555 kubelet[2814]: E1105 04:49:59.285392 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.286493 kubelet[2814]: E1105 04:49:59.286441 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.286562 kubelet[2814]: W1105 04:49:59.286474 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.286562 kubelet[2814]: E1105 04:49:59.286527 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.286896 kubelet[2814]: E1105 04:49:59.286876 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.286896 kubelet[2814]: W1105 04:49:59.286891 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.286991 kubelet[2814]: E1105 04:49:59.286901 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.287343 kubelet[2814]: E1105 04:49:59.287324 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.287343 kubelet[2814]: W1105 04:49:59.287337 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.287430 kubelet[2814]: E1105 04:49:59.287347 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.287642 kubelet[2814]: E1105 04:49:59.287622 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.287642 kubelet[2814]: W1105 04:49:59.287635 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.287642 kubelet[2814]: E1105 04:49:59.287644 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.290140 kubelet[2814]: E1105 04:49:59.290098 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.290211 kubelet[2814]: W1105 04:49:59.290153 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.290211 kubelet[2814]: E1105 04:49:59.290167 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.290951 kubelet[2814]: E1105 04:49:59.290653 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.290951 kubelet[2814]: W1105 04:49:59.290669 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.290951 kubelet[2814]: E1105 04:49:59.290679 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.291881 kubelet[2814]: E1105 04:49:59.291853 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.292264 kubelet[2814]: W1105 04:49:59.292229 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.292264 kubelet[2814]: E1105 04:49:59.292252 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.296814 kubelet[2814]: E1105 04:49:59.296715 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:59.298935 containerd[1644]: time="2025-11-05T04:49:59.298817376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d7mkr,Uid:5e1d9d01-5a2e-4f05-99fb-157453da453c,Namespace:calico-system,Attempt:0,}" Nov 5 04:49:59.326939 systemd[1]: Started cri-containerd-3f37100aa3d959284c2d45c8007f765de31f25a27514fcf0529d40edf506f521.scope - libcontainer container 3f37100aa3d959284c2d45c8007f765de31f25a27514fcf0529d40edf506f521. Nov 5 04:49:59.337334 containerd[1644]: time="2025-11-05T04:49:59.337291263Z" level=info msg="connecting to shim b23446184562c22fef7e0cc9ca077e817d66d0d95fcd1b4e6b040cd1993c2e16" address="unix:///run/containerd/s/1517526321b6e52cb27085e09aa47a03158f5a283663e38384f1ffeb87999a69" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:49:59.383315 systemd[1]: Started cri-containerd-b23446184562c22fef7e0cc9ca077e817d66d0d95fcd1b4e6b040cd1993c2e16.scope - libcontainer container b23446184562c22fef7e0cc9ca077e817d66d0d95fcd1b4e6b040cd1993c2e16. Nov 5 04:49:59.385719 kubelet[2814]: E1105 04:49:59.385659 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.385719 kubelet[2814]: W1105 04:49:59.385691 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.385719 kubelet[2814]: E1105 04:49:59.385715 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.386439 kubelet[2814]: E1105 04:49:59.386412 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.386486 kubelet[2814]: W1105 04:49:59.386449 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.386486 kubelet[2814]: E1105 04:49:59.386463 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.387455 kubelet[2814]: E1105 04:49:59.387265 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.387455 kubelet[2814]: W1105 04:49:59.387290 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.387455 kubelet[2814]: E1105 04:49:59.387314 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.387596 kubelet[2814]: E1105 04:49:59.387576 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.387596 kubelet[2814]: W1105 04:49:59.387592 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.387652 kubelet[2814]: E1105 04:49:59.387605 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.388022 kubelet[2814]: E1105 04:49:59.387855 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.388022 kubelet[2814]: W1105 04:49:59.387868 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.388022 kubelet[2814]: E1105 04:49:59.387879 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 04:49:59.483921 kubelet[2814]: E1105 04:49:59.483796 2814 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 04:49:59.483921 kubelet[2814]: W1105 04:49:59.483826 2814 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 04:49:59.483921 kubelet[2814]: E1105 04:49:59.483847 2814 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 04:49:59.508598 containerd[1644]: time="2025-11-05T04:49:59.508501771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d7mkr,Uid:5e1d9d01-5a2e-4f05-99fb-157453da453c,Namespace:calico-system,Attempt:0,} returns sandbox id \"b23446184562c22fef7e0cc9ca077e817d66d0d95fcd1b4e6b040cd1993c2e16\"" Nov 5 04:49:59.510255 kubelet[2814]: E1105 04:49:59.510165 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:59.513817 containerd[1644]: time="2025-11-05T04:49:59.513754596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77d8656dc7-rsl2j,Uid:627cec5f-54a1-41a1-95b8-a02d46769662,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f37100aa3d959284c2d45c8007f765de31f25a27514fcf0529d40edf506f521\"" Nov 5 04:49:59.514689 kubelet[2814]: E1105 04:49:59.514641 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:49:59.516435 containerd[1644]: time="2025-11-05T04:49:59.516355009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" 
Nov 5 04:50:00.858937 kubelet[2814]: E1105 04:50:00.858883 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:50:01.399155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158093977.mount: Deactivated successfully. Nov 5 04:50:02.499324 containerd[1644]: time="2025-11-05T04:50:02.499261724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:02.501666 containerd[1644]: time="2025-11-05T04:50:02.501030135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:02.502098 containerd[1644]: time="2025-11-05T04:50:02.502052722Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:02.504480 containerd[1644]: time="2025-11-05T04:50:02.504416244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:02.505179 containerd[1644]: time="2025-11-05T04:50:02.505145137Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.988731889s" Nov 5 04:50:02.505225 
containerd[1644]: time="2025-11-05T04:50:02.505180614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 04:50:02.506719 containerd[1644]: time="2025-11-05T04:50:02.506679588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 04:50:02.514520 containerd[1644]: time="2025-11-05T04:50:02.514474472Z" level=info msg="CreateContainer within sandbox \"b23446184562c22fef7e0cc9ca077e817d66d0d95fcd1b4e6b040cd1993c2e16\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 04:50:02.528787 containerd[1644]: time="2025-11-05T04:50:02.528717123Z" level=info msg="Container a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:50:02.542251 containerd[1644]: time="2025-11-05T04:50:02.542199100Z" level=info msg="CreateContainer within sandbox \"b23446184562c22fef7e0cc9ca077e817d66d0d95fcd1b4e6b040cd1993c2e16\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e\"" Nov 5 04:50:02.543001 containerd[1644]: time="2025-11-05T04:50:02.542947399Z" level=info msg="StartContainer for \"a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e\"" Nov 5 04:50:02.544680 containerd[1644]: time="2025-11-05T04:50:02.544653974Z" level=info msg="connecting to shim a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e" address="unix:///run/containerd/s/1517526321b6e52cb27085e09aa47a03158f5a283663e38384f1ffeb87999a69" protocol=ttrpc version=3 Nov 5 04:50:02.576892 systemd[1]: Started cri-containerd-a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e.scope - libcontainer container a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e. 
Nov 5 04:50:02.623243 containerd[1644]: time="2025-11-05T04:50:02.623181845Z" level=info msg="StartContainer for \"a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e\" returns successfully" Nov 5 04:50:02.634122 systemd[1]: cri-containerd-a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e.scope: Deactivated successfully. Nov 5 04:50:02.636146 containerd[1644]: time="2025-11-05T04:50:02.636109810Z" level=info msg="received exit event container_id:\"a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e\" id:\"a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e\" pid:3440 exited_at:{seconds:1762318202 nanos:635548071}" Nov 5 04:50:02.661108 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a953e4f9d13155db14fadc5eece729c85e1e7237473d9203b9e762c5a4dc560e-rootfs.mount: Deactivated successfully. Nov 5 04:50:02.858987 kubelet[2814]: E1105 04:50:02.858838 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:50:02.945561 kubelet[2814]: E1105 04:50:02.945371 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:04.743619 containerd[1644]: time="2025-11-05T04:50:04.743548222Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:04.744406 containerd[1644]: time="2025-11-05T04:50:04.744371953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33735893" Nov 5 04:50:04.745618 containerd[1644]: time="2025-11-05T04:50:04.745566993Z" level=info msg="ImageCreate 
event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:04.747795 containerd[1644]: time="2025-11-05T04:50:04.747763708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:04.748363 containerd[1644]: time="2025-11-05T04:50:04.748307011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.241592358s" Nov 5 04:50:04.748363 containerd[1644]: time="2025-11-05T04:50:04.748353640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 5 04:50:04.749630 containerd[1644]: time="2025-11-05T04:50:04.749581551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 04:50:04.762701 containerd[1644]: time="2025-11-05T04:50:04.762312059Z" level=info msg="CreateContainer within sandbox \"3f37100aa3d959284c2d45c8007f765de31f25a27514fcf0529d40edf506f521\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 04:50:04.770708 containerd[1644]: time="2025-11-05T04:50:04.770648054Z" level=info msg="Container 213734f4e030080d115124a67af2318f4185cbc3889d36f1651ab8282bebd722: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:50:04.777555 containerd[1644]: time="2025-11-05T04:50:04.777502601Z" level=info msg="CreateContainer within sandbox \"3f37100aa3d959284c2d45c8007f765de31f25a27514fcf0529d40edf506f521\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container 
id \"213734f4e030080d115124a67af2318f4185cbc3889d36f1651ab8282bebd722\"" Nov 5 04:50:04.778177 containerd[1644]: time="2025-11-05T04:50:04.778068867Z" level=info msg="StartContainer for \"213734f4e030080d115124a67af2318f4185cbc3889d36f1651ab8282bebd722\"" Nov 5 04:50:04.779114 containerd[1644]: time="2025-11-05T04:50:04.779088917Z" level=info msg="connecting to shim 213734f4e030080d115124a67af2318f4185cbc3889d36f1651ab8282bebd722" address="unix:///run/containerd/s/750c1eeed46dee698651d09c1198c2754b455bd89b9564892907b396b1396926" protocol=ttrpc version=3 Nov 5 04:50:04.801900 systemd[1]: Started cri-containerd-213734f4e030080d115124a67af2318f4185cbc3889d36f1651ab8282bebd722.scope - libcontainer container 213734f4e030080d115124a67af2318f4185cbc3889d36f1651ab8282bebd722. Nov 5 04:50:04.850815 containerd[1644]: time="2025-11-05T04:50:04.850758115Z" level=info msg="StartContainer for \"213734f4e030080d115124a67af2318f4185cbc3889d36f1651ab8282bebd722\" returns successfully" Nov 5 04:50:04.860116 kubelet[2814]: E1105 04:50:04.860057 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:50:04.952571 kubelet[2814]: E1105 04:50:04.952500 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:04.975333 kubelet[2814]: I1105 04:50:04.975224 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77d8656dc7-rsl2j" podStartSLOduration=1.7417700059999999 podStartE2EDuration="6.975206696s" podCreationTimestamp="2025-11-05 04:49:58 +0000 UTC" firstStartedPulling="2025-11-05 04:49:59.515876246 +0000 UTC m=+20.756185009" 
lastFinishedPulling="2025-11-05 04:50:04.749312936 +0000 UTC m=+25.989621699" observedRunningTime="2025-11-05 04:50:04.974884108 +0000 UTC m=+26.215192871" watchObservedRunningTime="2025-11-05 04:50:04.975206696 +0000 UTC m=+26.215515459" Nov 5 04:50:05.951818 kubelet[2814]: I1105 04:50:05.951766 2814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 04:50:05.952296 kubelet[2814]: E1105 04:50:05.952123 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:06.860345 kubelet[2814]: E1105 04:50:06.860273 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:50:08.858652 kubelet[2814]: E1105 04:50:08.858589 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:50:09.447182 containerd[1644]: time="2025-11-05T04:50:09.447104849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:09.450902 containerd[1644]: time="2025-11-05T04:50:09.450839564Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70442291" Nov 5 04:50:09.452645 containerd[1644]: time="2025-11-05T04:50:09.452595516Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:09.454455 containerd[1644]: time="2025-11-05T04:50:09.454420518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:09.455014 containerd[1644]: time="2025-11-05T04:50:09.454967457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.705343376s" Nov 5 04:50:09.455014 containerd[1644]: time="2025-11-05T04:50:09.455002803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 5 04:50:09.459930 containerd[1644]: time="2025-11-05T04:50:09.459886468Z" level=info msg="CreateContainer within sandbox \"b23446184562c22fef7e0cc9ca077e817d66d0d95fcd1b4e6b040cd1993c2e16\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 04:50:09.468416 containerd[1644]: time="2025-11-05T04:50:09.468369382Z" level=info msg="Container 69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:50:09.476535 containerd[1644]: time="2025-11-05T04:50:09.476479886Z" level=info msg="CreateContainer within sandbox \"b23446184562c22fef7e0cc9ca077e817d66d0d95fcd1b4e6b040cd1993c2e16\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7\"" Nov 5 04:50:09.477080 containerd[1644]: time="2025-11-05T04:50:09.477040320Z" level=info msg="StartContainer for 
\"69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7\"" Nov 5 04:50:09.478610 containerd[1644]: time="2025-11-05T04:50:09.478579314Z" level=info msg="connecting to shim 69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7" address="unix:///run/containerd/s/1517526321b6e52cb27085e09aa47a03158f5a283663e38384f1ffeb87999a69" protocol=ttrpc version=3 Nov 5 04:50:09.506911 systemd[1]: Started cri-containerd-69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7.scope - libcontainer container 69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7. Nov 5 04:50:09.548522 containerd[1644]: time="2025-11-05T04:50:09.548483145Z" level=info msg="StartContainer for \"69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7\" returns successfully" Nov 5 04:50:09.965425 kubelet[2814]: E1105 04:50:09.965365 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:10.663268 systemd[1]: cri-containerd-69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7.scope: Deactivated successfully. Nov 5 04:50:10.663692 systemd[1]: cri-containerd-69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7.scope: Consumed 597ms CPU time, 178.4M memory peak, 3.6M read from disk, 171.3M written to disk. Nov 5 04:50:10.664531 containerd[1644]: time="2025-11-05T04:50:10.664480332Z" level=info msg="received exit event container_id:\"69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7\" id:\"69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7\" pid:3541 exited_at:{seconds:1762318210 nanos:664195748}" Nov 5 04:50:10.691574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69e7a9048a825dee8cac83b99ea037ed7061c3f2ee591003e8d2ca7d11ecc4d7-rootfs.mount: Deactivated successfully. 
Nov 5 04:50:10.716604 kubelet[2814]: I1105 04:50:10.716555 2814 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 5 04:50:10.865406 systemd[1]: Created slice kubepods-besteffort-podf4956568_400e_4c71_8a7d_11217f3b2032.slice - libcontainer container kubepods-besteffort-podf4956568_400e_4c71_8a7d_11217f3b2032.slice. Nov 5 04:50:10.908105 containerd[1644]: time="2025-11-05T04:50:10.907867519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zml8c,Uid:f4956568-400e-4c71-8a7d-11217f3b2032,Namespace:calico-system,Attempt:0,}" Nov 5 04:50:10.930774 systemd[1]: Created slice kubepods-burstable-pod5c3560b7_8a2f_467b_a954_04af37b9cb69.slice - libcontainer container kubepods-burstable-pod5c3560b7_8a2f_467b_a954_04af37b9cb69.slice. Nov 5 04:50:10.948005 systemd[1]: Created slice kubepods-burstable-pod745e342d_8061_4ed4_94ef_e773f5826da7.slice - libcontainer container kubepods-burstable-pod745e342d_8061_4ed4_94ef_e773f5826da7.slice. Nov 5 04:50:10.957586 systemd[1]: Created slice kubepods-besteffort-podc1cae229_f08d_4b80_be11_a077ed5ab750.slice - libcontainer container kubepods-besteffort-podc1cae229_f08d_4b80_be11_a077ed5ab750.slice. Nov 5 04:50:10.966021 systemd[1]: Created slice kubepods-besteffort-pod28221097_2724_4244_b685_bd415dc30351.slice - libcontainer container kubepods-besteffort-pod28221097_2724_4244_b685_bd415dc30351.slice. Nov 5 04:50:10.976322 systemd[1]: Created slice kubepods-besteffort-pod6c9d1710_ea97_4e77_b4cf_83ba2c2c8004.slice - libcontainer container kubepods-besteffort-pod6c9d1710_ea97_4e77_b4cf_83ba2c2c8004.slice. 
Nov 5 04:50:10.983069 kubelet[2814]: E1105 04:50:10.983031 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:10.984127 systemd[1]: Created slice kubepods-besteffort-pod42c91c94_d899_482d_a358_2c80842231e1.slice - libcontainer container kubepods-besteffort-pod42c91c94_d899_482d_a358_2c80842231e1.slice. Nov 5 04:50:10.984919 containerd[1644]: time="2025-11-05T04:50:10.984881175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 04:50:10.991888 systemd[1]: Created slice kubepods-besteffort-pod6f45e524_4d73_4922_a936_2a28ed045d2c.slice - libcontainer container kubepods-besteffort-pod6f45e524_4d73_4922_a936_2a28ed045d2c.slice. Nov 5 04:50:11.033988 containerd[1644]: time="2025-11-05T04:50:11.033916562Z" level=error msg="Failed to destroy network for sandbox \"46d9d793a9aafb5faa9b3665dc57f57602d98e87b8aec079b9aaf36d44a0180f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.038972 systemd[1]: run-netns-cni\x2d98b496ec\x2dc29b\x2da021\x2db82a\x2dbde766ff099c.mount: Deactivated successfully. 
Nov 5 04:50:11.044939 containerd[1644]: time="2025-11-05T04:50:11.044858022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zml8c,Uid:f4956568-400e-4c71-8a7d-11217f3b2032,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d9d793a9aafb5faa9b3665dc57f57602d98e87b8aec079b9aaf36d44a0180f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.046937 kubelet[2814]: E1105 04:50:11.046868 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d9d793a9aafb5faa9b3665dc57f57602d98e87b8aec079b9aaf36d44a0180f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.046998 kubelet[2814]: E1105 04:50:11.046967 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d9d793a9aafb5faa9b3665dc57f57602d98e87b8aec079b9aaf36d44a0180f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zml8c" Nov 5 04:50:11.047036 kubelet[2814]: E1105 04:50:11.046999 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d9d793a9aafb5faa9b3665dc57f57602d98e87b8aec079b9aaf36d44a0180f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zml8c" Nov 5 
04:50:11.047697 kubelet[2814]: E1105 04:50:11.047062 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zml8c_calico-system(f4956568-400e-4c71-8a7d-11217f3b2032)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zml8c_calico-system(f4956568-400e-4c71-8a7d-11217f3b2032)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46d9d793a9aafb5faa9b3665dc57f57602d98e87b8aec079b9aaf36d44a0180f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:50:11.068204 kubelet[2814]: I1105 04:50:11.068144 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbmj6\" (UniqueName: \"kubernetes.io/projected/c1cae229-f08d-4b80-be11-a077ed5ab750-kube-api-access-dbmj6\") pod \"calico-kube-controllers-75b5b9c475-hhsfh\" (UID: \"c1cae229-f08d-4b80-be11-a077ed5ab750\") " pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" Nov 5 04:50:11.068204 kubelet[2814]: I1105 04:50:11.068200 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28221097-2724-4244-b685-bd415dc30351-calico-apiserver-certs\") pod \"calico-apiserver-5c9fdcbf84-kfpz4\" (UID: \"28221097-2724-4244-b685-bd415dc30351\") " pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" Nov 5 04:50:11.068408 kubelet[2814]: I1105 04:50:11.068242 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/42c91c94-d899-482d-a358-2c80842231e1-goldmane-key-pair\") pod \"goldmane-7c778bb748-cqvt5\" (UID: 
\"42c91c94-d899-482d-a358-2c80842231e1\") " pod="calico-system/goldmane-7c778bb748-cqvt5" Nov 5 04:50:11.068408 kubelet[2814]: I1105 04:50:11.068260 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxkt4\" (UniqueName: \"kubernetes.io/projected/42c91c94-d899-482d-a358-2c80842231e1-kube-api-access-vxkt4\") pod \"goldmane-7c778bb748-cqvt5\" (UID: \"42c91c94-d899-482d-a358-2c80842231e1\") " pod="calico-system/goldmane-7c778bb748-cqvt5" Nov 5 04:50:11.068408 kubelet[2814]: I1105 04:50:11.068278 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c3560b7-8a2f-467b-a954-04af37b9cb69-config-volume\") pod \"coredns-66bc5c9577-dw9s5\" (UID: \"5c3560b7-8a2f-467b-a954-04af37b9cb69\") " pod="kube-system/coredns-66bc5c9577-dw9s5" Nov 5 04:50:11.068408 kubelet[2814]: I1105 04:50:11.068332 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-whisker-backend-key-pair\") pod \"whisker-74cbf7bd55-z2lfr\" (UID: \"6c9d1710-ea97-4e77-b4cf-83ba2c2c8004\") " pod="calico-system/whisker-74cbf7bd55-z2lfr" Nov 5 04:50:11.068509 kubelet[2814]: I1105 04:50:11.068432 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf9wj\" (UniqueName: \"kubernetes.io/projected/5c3560b7-8a2f-467b-a954-04af37b9cb69-kube-api-access-rf9wj\") pod \"coredns-66bc5c9577-dw9s5\" (UID: \"5c3560b7-8a2f-467b-a954-04af37b9cb69\") " pod="kube-system/coredns-66bc5c9577-dw9s5" Nov 5 04:50:11.068509 kubelet[2814]: I1105 04:50:11.068449 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gfgb\" (UniqueName: 
\"kubernetes.io/projected/745e342d-8061-4ed4-94ef-e773f5826da7-kube-api-access-7gfgb\") pod \"coredns-66bc5c9577-swdbh\" (UID: \"745e342d-8061-4ed4-94ef-e773f5826da7\") " pod="kube-system/coredns-66bc5c9577-swdbh" Nov 5 04:50:11.068509 kubelet[2814]: I1105 04:50:11.068468 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdffn\" (UniqueName: \"kubernetes.io/projected/28221097-2724-4244-b685-bd415dc30351-kube-api-access-pdffn\") pod \"calico-apiserver-5c9fdcbf84-kfpz4\" (UID: \"28221097-2724-4244-b685-bd415dc30351\") " pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" Nov 5 04:50:11.068583 kubelet[2814]: I1105 04:50:11.068519 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfg2b\" (UniqueName: \"kubernetes.io/projected/6f45e524-4d73-4922-a936-2a28ed045d2c-kube-api-access-xfg2b\") pod \"calico-apiserver-5c9fdcbf84-drbnr\" (UID: \"6f45e524-4d73-4922-a936-2a28ed045d2c\") " pod="calico-apiserver/calico-apiserver-5c9fdcbf84-drbnr" Nov 5 04:50:11.068583 kubelet[2814]: I1105 04:50:11.068541 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/42c91c94-d899-482d-a358-2c80842231e1-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-cqvt5\" (UID: \"42c91c94-d899-482d-a358-2c80842231e1\") " pod="calico-system/goldmane-7c778bb748-cqvt5" Nov 5 04:50:11.068583 kubelet[2814]: I1105 04:50:11.068556 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/745e342d-8061-4ed4-94ef-e773f5826da7-config-volume\") pod \"coredns-66bc5c9577-swdbh\" (UID: \"745e342d-8061-4ed4-94ef-e773f5826da7\") " pod="kube-system/coredns-66bc5c9577-swdbh" Nov 5 04:50:11.068583 kubelet[2814]: I1105 04:50:11.068571 2814 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gqc9\" (UniqueName: \"kubernetes.io/projected/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-kube-api-access-4gqc9\") pod \"whisker-74cbf7bd55-z2lfr\" (UID: \"6c9d1710-ea97-4e77-b4cf-83ba2c2c8004\") " pod="calico-system/whisker-74cbf7bd55-z2lfr" Nov 5 04:50:11.068694 kubelet[2814]: I1105 04:50:11.068584 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1cae229-f08d-4b80-be11-a077ed5ab750-tigera-ca-bundle\") pod \"calico-kube-controllers-75b5b9c475-hhsfh\" (UID: \"c1cae229-f08d-4b80-be11-a077ed5ab750\") " pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" Nov 5 04:50:11.068694 kubelet[2814]: I1105 04:50:11.068599 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6f45e524-4d73-4922-a936-2a28ed045d2c-calico-apiserver-certs\") pod \"calico-apiserver-5c9fdcbf84-drbnr\" (UID: \"6f45e524-4d73-4922-a936-2a28ed045d2c\") " pod="calico-apiserver/calico-apiserver-5c9fdcbf84-drbnr" Nov 5 04:50:11.068694 kubelet[2814]: I1105 04:50:11.068611 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-whisker-ca-bundle\") pod \"whisker-74cbf7bd55-z2lfr\" (UID: \"6c9d1710-ea97-4e77-b4cf-83ba2c2c8004\") " pod="calico-system/whisker-74cbf7bd55-z2lfr" Nov 5 04:50:11.068694 kubelet[2814]: I1105 04:50:11.068626 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42c91c94-d899-482d-a358-2c80842231e1-config\") pod \"goldmane-7c778bb748-cqvt5\" (UID: \"42c91c94-d899-482d-a358-2c80842231e1\") " pod="calico-system/goldmane-7c778bb748-cqvt5" Nov 5 
04:50:11.414403 kubelet[2814]: E1105 04:50:11.414374 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:11.415023 containerd[1644]: time="2025-11-05T04:50:11.414957431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dw9s5,Uid:5c3560b7-8a2f-467b-a954-04af37b9cb69,Namespace:kube-system,Attempt:0,}" Nov 5 04:50:11.625434 kubelet[2814]: E1105 04:50:11.625340 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:11.626155 containerd[1644]: time="2025-11-05T04:50:11.626099477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-swdbh,Uid:745e342d-8061-4ed4-94ef-e773f5826da7,Namespace:kube-system,Attempt:0,}" Nov 5 04:50:11.628355 containerd[1644]: time="2025-11-05T04:50:11.628281699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9fdcbf84-drbnr,Uid:6f45e524-4d73-4922-a936-2a28ed045d2c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:50:11.629945 containerd[1644]: time="2025-11-05T04:50:11.629899020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9fdcbf84-kfpz4,Uid:28221097-2724-4244-b685-bd415dc30351,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:50:11.632166 containerd[1644]: time="2025-11-05T04:50:11.632136307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74cbf7bd55-z2lfr,Uid:6c9d1710-ea97-4e77-b4cf-83ba2c2c8004,Namespace:calico-system,Attempt:0,}" Nov 5 04:50:11.633545 containerd[1644]: time="2025-11-05T04:50:11.633454495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cqvt5,Uid:42c91c94-d899-482d-a358-2c80842231e1,Namespace:calico-system,Attempt:0,}" Nov 5 04:50:11.635044 containerd[1644]: 
time="2025-11-05T04:50:11.634990172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75b5b9c475-hhsfh,Uid:c1cae229-f08d-4b80-be11-a077ed5ab750,Namespace:calico-system,Attempt:0,}" Nov 5 04:50:11.762443 containerd[1644]: time="2025-11-05T04:50:11.762262145Z" level=error msg="Failed to destroy network for sandbox \"e4e85a29845c6ec54662d2331f39fa0acf9bb524b2deb8b625a3267e2ca78102\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.768337 systemd[1]: run-netns-cni\x2d31c91373\x2d552b\x2db445\x2d2d21\x2d53cc511b8365.mount: Deactivated successfully. Nov 5 04:50:11.771956 containerd[1644]: time="2025-11-05T04:50:11.771332749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9fdcbf84-drbnr,Uid:6f45e524-4d73-4922-a936-2a28ed045d2c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e85a29845c6ec54662d2331f39fa0acf9bb524b2deb8b625a3267e2ca78102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.772075 kubelet[2814]: E1105 04:50:11.771679 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e85a29845c6ec54662d2331f39fa0acf9bb524b2deb8b625a3267e2ca78102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.772075 kubelet[2814]: E1105 04:50:11.771824 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e4e85a29845c6ec54662d2331f39fa0acf9bb524b2deb8b625a3267e2ca78102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-drbnr" Nov 5 04:50:11.772075 kubelet[2814]: E1105 04:50:11.771847 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4e85a29845c6ec54662d2331f39fa0acf9bb524b2deb8b625a3267e2ca78102\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-drbnr" Nov 5 04:50:11.772166 kubelet[2814]: E1105 04:50:11.771916 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c9fdcbf84-drbnr_calico-apiserver(6f45e524-4d73-4922-a936-2a28ed045d2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c9fdcbf84-drbnr_calico-apiserver(6f45e524-4d73-4922-a936-2a28ed045d2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4e85a29845c6ec54662d2331f39fa0acf9bb524b2deb8b625a3267e2ca78102\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-drbnr" podUID="6f45e524-4d73-4922-a936-2a28ed045d2c" Nov 5 04:50:11.790999 containerd[1644]: time="2025-11-05T04:50:11.790801762Z" level=error msg="Failed to destroy network for sandbox \"5707245cc045f8f8396b64525f47d7f728b7dea5bdac7c507d56034a36477906\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Nov 5 04:50:11.797115 systemd[1]: run-netns-cni\x2d6abb0f51\x2d3a44\x2dc0af\x2ded24\x2d138f1a711742.mount: Deactivated successfully. Nov 5 04:50:11.798567 containerd[1644]: time="2025-11-05T04:50:11.797729978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-swdbh,Uid:745e342d-8061-4ed4-94ef-e773f5826da7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5707245cc045f8f8396b64525f47d7f728b7dea5bdac7c507d56034a36477906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.798675 kubelet[2814]: E1105 04:50:11.798002 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5707245cc045f8f8396b64525f47d7f728b7dea5bdac7c507d56034a36477906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.798675 kubelet[2814]: E1105 04:50:11.798064 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5707245cc045f8f8396b64525f47d7f728b7dea5bdac7c507d56034a36477906\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-swdbh" Nov 5 04:50:11.798675 kubelet[2814]: E1105 04:50:11.798083 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5707245cc045f8f8396b64525f47d7f728b7dea5bdac7c507d56034a36477906\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-swdbh" Nov 5 04:50:11.798800 kubelet[2814]: E1105 04:50:11.798147 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-swdbh_kube-system(745e342d-8061-4ed4-94ef-e773f5826da7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-swdbh_kube-system(745e342d-8061-4ed4-94ef-e773f5826da7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5707245cc045f8f8396b64525f47d7f728b7dea5bdac7c507d56034a36477906\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-swdbh" podUID="745e342d-8061-4ed4-94ef-e773f5826da7" Nov 5 04:50:11.807482 containerd[1644]: time="2025-11-05T04:50:11.807405929Z" level=error msg="Failed to destroy network for sandbox \"461e52616bc310117ff417dbf4f5529a70e89083e24a30459af1e41ddad045d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.810331 containerd[1644]: time="2025-11-05T04:50:11.810278600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-74cbf7bd55-z2lfr,Uid:6c9d1710-ea97-4e77-b4cf-83ba2c2c8004,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"461e52616bc310117ff417dbf4f5529a70e89083e24a30459af1e41ddad045d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.810723 systemd[1]: 
run-netns-cni\x2d6d9dc7e2\x2d094a\x2d500f\x2de100\x2d6b1ce19ab4f9.mount: Deactivated successfully. Nov 5 04:50:11.811338 kubelet[2814]: E1105 04:50:11.811071 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461e52616bc310117ff417dbf4f5529a70e89083e24a30459af1e41ddad045d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.811338 kubelet[2814]: E1105 04:50:11.811130 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461e52616bc310117ff417dbf4f5529a70e89083e24a30459af1e41ddad045d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74cbf7bd55-z2lfr" Nov 5 04:50:11.811338 kubelet[2814]: E1105 04:50:11.811151 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461e52616bc310117ff417dbf4f5529a70e89083e24a30459af1e41ddad045d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-74cbf7bd55-z2lfr" Nov 5 04:50:11.811437 kubelet[2814]: E1105 04:50:11.811218 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-74cbf7bd55-z2lfr_calico-system(6c9d1710-ea97-4e77-b4cf-83ba2c2c8004)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-74cbf7bd55-z2lfr_calico-system(6c9d1710-ea97-4e77-b4cf-83ba2c2c8004)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"461e52616bc310117ff417dbf4f5529a70e89083e24a30459af1e41ddad045d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-74cbf7bd55-z2lfr" podUID="6c9d1710-ea97-4e77-b4cf-83ba2c2c8004" Nov 5 04:50:11.815937 containerd[1644]: time="2025-11-05T04:50:11.815865043Z" level=error msg="Failed to destroy network for sandbox \"06821d0905e33be889c79bf5f720842e663f1b46151d3a7e23099bea364d8d39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.819590 systemd[1]: run-netns-cni\x2d411843f4\x2d7bc9\x2d3d72\x2dae04\x2d37c30ffdb8ef.mount: Deactivated successfully. Nov 5 04:50:11.822290 containerd[1644]: time="2025-11-05T04:50:11.822237544Z" level=error msg="Failed to destroy network for sandbox \"c0afd0e167f56c64638eedcfb6e042b69a965ef5a011ece983e05fef2ed6cf35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.823562 containerd[1644]: time="2025-11-05T04:50:11.823420148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cqvt5,Uid:42c91c94-d899-482d-a358-2c80842231e1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06821d0905e33be889c79bf5f720842e663f1b46151d3a7e23099bea364d8d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.823926 kubelet[2814]: E1105 04:50:11.823851 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"06821d0905e33be889c79bf5f720842e663f1b46151d3a7e23099bea364d8d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.824666 kubelet[2814]: E1105 04:50:11.823925 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06821d0905e33be889c79bf5f720842e663f1b46151d3a7e23099bea364d8d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-cqvt5" Nov 5 04:50:11.824666 kubelet[2814]: E1105 04:50:11.823949 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06821d0905e33be889c79bf5f720842e663f1b46151d3a7e23099bea364d8d39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-cqvt5" Nov 5 04:50:11.824666 kubelet[2814]: E1105 04:50:11.824052 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-cqvt5_calico-system(42c91c94-d899-482d-a358-2c80842231e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-cqvt5_calico-system(42c91c94-d899-482d-a358-2c80842231e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06821d0905e33be889c79bf5f720842e663f1b46151d3a7e23099bea364d8d39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-7c778bb748-cqvt5" podUID="42c91c94-d899-482d-a358-2c80842231e1" Nov 5 04:50:11.826568 containerd[1644]: time="2025-11-05T04:50:11.826450865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dw9s5,Uid:5c3560b7-8a2f-467b-a954-04af37b9cb69,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0afd0e167f56c64638eedcfb6e042b69a965ef5a011ece983e05fef2ed6cf35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.826694 kubelet[2814]: E1105 04:50:11.826630 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0afd0e167f56c64638eedcfb6e042b69a965ef5a011ece983e05fef2ed6cf35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.826694 kubelet[2814]: E1105 04:50:11.826665 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0afd0e167f56c64638eedcfb6e042b69a965ef5a011ece983e05fef2ed6cf35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dw9s5" Nov 5 04:50:11.826827 kubelet[2814]: E1105 04:50:11.826699 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0afd0e167f56c64638eedcfb6e042b69a965ef5a011ece983e05fef2ed6cf35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dw9s5" Nov 5 04:50:11.826827 kubelet[2814]: E1105 04:50:11.826783 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dw9s5_kube-system(5c3560b7-8a2f-467b-a954-04af37b9cb69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dw9s5_kube-system(5c3560b7-8a2f-467b-a954-04af37b9cb69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0afd0e167f56c64638eedcfb6e042b69a965ef5a011ece983e05fef2ed6cf35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dw9s5" podUID="5c3560b7-8a2f-467b-a954-04af37b9cb69" Nov 5 04:50:11.827034 containerd[1644]: time="2025-11-05T04:50:11.827008244Z" level=error msg="Failed to destroy network for sandbox \"4492806520c79825c60b3ea157c79d6c78b0551d0c994ecd187f4ab98e83d415\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.828526 containerd[1644]: time="2025-11-05T04:50:11.828476284Z" level=error msg="Failed to destroy network for sandbox \"418a28e5f86db03a801ba384ac71f6b6f872c87c1bc149ca90fdc8451c94ad40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.829403 containerd[1644]: time="2025-11-05T04:50:11.829360516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75b5b9c475-hhsfh,Uid:c1cae229-f08d-4b80-be11-a077ed5ab750,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"4492806520c79825c60b3ea157c79d6c78b0551d0c994ecd187f4ab98e83d415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.829728 kubelet[2814]: E1105 04:50:11.829681 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4492806520c79825c60b3ea157c79d6c78b0551d0c994ecd187f4ab98e83d415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.830038 kubelet[2814]: E1105 04:50:11.829861 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4492806520c79825c60b3ea157c79d6c78b0551d0c994ecd187f4ab98e83d415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" Nov 5 04:50:11.830038 kubelet[2814]: E1105 04:50:11.829969 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4492806520c79825c60b3ea157c79d6c78b0551d0c994ecd187f4ab98e83d415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" Nov 5 04:50:11.830304 kubelet[2814]: E1105 04:50:11.830250 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75b5b9c475-hhsfh_calico-system(c1cae229-f08d-4b80-be11-a077ed5ab750)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75b5b9c475-hhsfh_calico-system(c1cae229-f08d-4b80-be11-a077ed5ab750)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4492806520c79825c60b3ea157c79d6c78b0551d0c994ecd187f4ab98e83d415\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" podUID="c1cae229-f08d-4b80-be11-a077ed5ab750" Nov 5 04:50:11.831499 containerd[1644]: time="2025-11-05T04:50:11.831441238Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9fdcbf84-kfpz4,Uid:28221097-2724-4244-b685-bd415dc30351,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"418a28e5f86db03a801ba384ac71f6b6f872c87c1bc149ca90fdc8451c94ad40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.831696 kubelet[2814]: E1105 04:50:11.831652 2814 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"418a28e5f86db03a801ba384ac71f6b6f872c87c1bc149ca90fdc8451c94ad40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 04:50:11.831819 kubelet[2814]: E1105 04:50:11.831702 2814 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"418a28e5f86db03a801ba384ac71f6b6f872c87c1bc149ca90fdc8451c94ad40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" Nov 5 04:50:11.831819 kubelet[2814]: E1105 04:50:11.831796 2814 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"418a28e5f86db03a801ba384ac71f6b6f872c87c1bc149ca90fdc8451c94ad40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" Nov 5 04:50:11.831906 kubelet[2814]: E1105 04:50:11.831844 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5c9fdcbf84-kfpz4_calico-apiserver(28221097-2724-4244-b685-bd415dc30351)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5c9fdcbf84-kfpz4_calico-apiserver(28221097-2724-4244-b685-bd415dc30351)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"418a28e5f86db03a801ba384ac71f6b6f872c87c1bc149ca90fdc8451c94ad40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" podUID="28221097-2724-4244-b685-bd415dc30351" Nov 5 04:50:12.691844 systemd[1]: run-netns-cni\x2d19fa6095\x2d2d8c\x2d43ad\x2d3b9a\x2dcb9fe2cafbcd.mount: Deactivated successfully. Nov 5 04:50:12.691972 systemd[1]: run-netns-cni\x2dd6ec21d3\x2de40b\x2dc4ef\x2d14b6\x2df85e664a2c20.mount: Deactivated successfully. Nov 5 04:50:12.692060 systemd[1]: run-netns-cni\x2d907c2b2a\x2dfe26\x2d05ef\x2da3ca\x2d8b0ceeb51849.mount: Deactivated successfully. 
Nov 5 04:50:18.991640 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:53918.service - OpenSSH per-connection server daemon (10.0.0.1:53918). Nov 5 04:50:19.062863 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 53918 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:19.064824 sshd-session[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:19.070757 systemd-logind[1625]: New session 10 of user core. Nov 5 04:50:19.076862 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 04:50:19.402696 sshd[3853]: Connection closed by 10.0.0.1 port 53918 Nov 5 04:50:19.402973 sshd-session[3849]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:19.410073 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:53918.service: Deactivated successfully. Nov 5 04:50:19.412872 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 04:50:19.415678 systemd-logind[1625]: Session 10 logged out. Waiting for processes to exit. Nov 5 04:50:19.417412 systemd-logind[1625]: Removed session 10. Nov 5 04:50:20.251826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3893370261.mount: Deactivated successfully. 
Nov 5 04:50:21.342456 containerd[1644]: time="2025-11-05T04:50:21.342375675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:21.356603 containerd[1644]: time="2025-11-05T04:50:21.343284462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156880025" Nov 5 04:50:21.356603 containerd[1644]: time="2025-11-05T04:50:21.344587319Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:21.356751 containerd[1644]: time="2025-11-05T04:50:21.346944846Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.362023014s" Nov 5 04:50:21.356787 containerd[1644]: time="2025-11-05T04:50:21.356755809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 04:50:21.357149 containerd[1644]: time="2025-11-05T04:50:21.357120444Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 04:50:21.375113 containerd[1644]: time="2025-11-05T04:50:21.375053969Z" level=info msg="CreateContainer within sandbox \"b23446184562c22fef7e0cc9ca077e817d66d0d95fcd1b4e6b040cd1993c2e16\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 04:50:21.389661 containerd[1644]: time="2025-11-05T04:50:21.389568376Z" level=info msg="Container 
e496729dd7e637f27a1be68f718a2aa4177523ae5be4d3f3ad37c622c551e716: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:50:21.398297 containerd[1644]: time="2025-11-05T04:50:21.398256641Z" level=info msg="CreateContainer within sandbox \"b23446184562c22fef7e0cc9ca077e817d66d0d95fcd1b4e6b040cd1993c2e16\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e496729dd7e637f27a1be68f718a2aa4177523ae5be4d3f3ad37c622c551e716\"" Nov 5 04:50:21.399765 containerd[1644]: time="2025-11-05T04:50:21.398871005Z" level=info msg="StartContainer for \"e496729dd7e637f27a1be68f718a2aa4177523ae5be4d3f3ad37c622c551e716\"" Nov 5 04:50:21.400367 containerd[1644]: time="2025-11-05T04:50:21.400345062Z" level=info msg="connecting to shim e496729dd7e637f27a1be68f718a2aa4177523ae5be4d3f3ad37c622c551e716" address="unix:///run/containerd/s/1517526321b6e52cb27085e09aa47a03158f5a283663e38384f1ffeb87999a69" protocol=ttrpc version=3 Nov 5 04:50:21.417894 systemd[1]: Started cri-containerd-e496729dd7e637f27a1be68f718a2aa4177523ae5be4d3f3ad37c622c551e716.scope - libcontainer container e496729dd7e637f27a1be68f718a2aa4177523ae5be4d3f3ad37c622c551e716. Nov 5 04:50:21.463393 containerd[1644]: time="2025-11-05T04:50:21.463350306Z" level=info msg="StartContainer for \"e496729dd7e637f27a1be68f718a2aa4177523ae5be4d3f3ad37c622c551e716\" returns successfully" Nov 5 04:50:21.537680 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 04:50:21.538804 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 5 04:50:21.734409 kubelet[2814]: I1105 04:50:21.734341 2814 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-whisker-backend-key-pair\") pod \"6c9d1710-ea97-4e77-b4cf-83ba2c2c8004\" (UID: \"6c9d1710-ea97-4e77-b4cf-83ba2c2c8004\") " Nov 5 04:50:21.734409 kubelet[2814]: I1105 04:50:21.734411 2814 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gqc9\" (UniqueName: \"kubernetes.io/projected/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-kube-api-access-4gqc9\") pod \"6c9d1710-ea97-4e77-b4cf-83ba2c2c8004\" (UID: \"6c9d1710-ea97-4e77-b4cf-83ba2c2c8004\") " Nov 5 04:50:21.734950 kubelet[2814]: I1105 04:50:21.734429 2814 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-whisker-ca-bundle\") pod \"6c9d1710-ea97-4e77-b4cf-83ba2c2c8004\" (UID: \"6c9d1710-ea97-4e77-b4cf-83ba2c2c8004\") " Nov 5 04:50:21.735064 kubelet[2814]: I1105 04:50:21.735029 2814 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6c9d1710-ea97-4e77-b4cf-83ba2c2c8004" (UID: "6c9d1710-ea97-4e77-b4cf-83ba2c2c8004"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 04:50:21.738978 kubelet[2814]: I1105 04:50:21.738915 2814 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6c9d1710-ea97-4e77-b4cf-83ba2c2c8004" (UID: "6c9d1710-ea97-4e77-b4cf-83ba2c2c8004"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 04:50:21.739420 kubelet[2814]: I1105 04:50:21.739388 2814 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-kube-api-access-4gqc9" (OuterVolumeSpecName: "kube-api-access-4gqc9") pod "6c9d1710-ea97-4e77-b4cf-83ba2c2c8004" (UID: "6c9d1710-ea97-4e77-b4cf-83ba2c2c8004"). InnerVolumeSpecName "kube-api-access-4gqc9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 04:50:21.835028 kubelet[2814]: I1105 04:50:21.834981 2814 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4gqc9\" (UniqueName: \"kubernetes.io/projected/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-kube-api-access-4gqc9\") on node \"localhost\" DevicePath \"\"" Nov 5 04:50:21.835028 kubelet[2814]: I1105 04:50:21.835014 2814 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 04:50:21.835028 kubelet[2814]: I1105 04:50:21.835023 2814 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 04:50:21.914409 containerd[1644]: time="2025-11-05T04:50:21.914336686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zml8c,Uid:f4956568-400e-4c71-8a7d-11217f3b2032,Namespace:calico-system,Attempt:0,}" Nov 5 04:50:22.033101 kubelet[2814]: E1105 04:50:22.032920 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:22.040690 systemd[1]: Removed slice kubepods-besteffort-pod6c9d1710_ea97_4e77_b4cf_83ba2c2c8004.slice - libcontainer container 
kubepods-besteffort-pod6c9d1710_ea97_4e77_b4cf_83ba2c2c8004.slice. Nov 5 04:50:22.067127 kubelet[2814]: I1105 04:50:22.067059 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d7mkr" podStartSLOduration=2.225180997 podStartE2EDuration="24.067039547s" podCreationTimestamp="2025-11-05 04:49:58 +0000 UTC" firstStartedPulling="2025-11-05 04:49:59.515860948 +0000 UTC m=+20.756169711" lastFinishedPulling="2025-11-05 04:50:21.357719498 +0000 UTC m=+42.598028261" observedRunningTime="2025-11-05 04:50:22.056463979 +0000 UTC m=+43.296772742" watchObservedRunningTime="2025-11-05 04:50:22.067039547 +0000 UTC m=+43.307348310" Nov 5 04:50:22.107275 systemd[1]: Created slice kubepods-besteffort-podb3d93d76_fadd_41a3_bad6_e61b29990155.slice - libcontainer container kubepods-besteffort-podb3d93d76_fadd_41a3_bad6_e61b29990155.slice. Nov 5 04:50:22.136231 systemd-networkd[1534]: calie079cd26daa: Link UP Nov 5 04:50:22.136927 systemd-networkd[1534]: calie079cd26daa: Gained carrier Nov 5 04:50:22.151898 containerd[1644]: 2025-11-05 04:50:21.978 [INFO][3929] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:50:22.151898 containerd[1644]: 2025-11-05 04:50:21.997 [INFO][3929] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--zml8c-eth0 csi-node-driver- calico-system f4956568-400e-4c71-8a7d-11217f3b2032 723 0 2025-11-05 04:49:59 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-zml8c eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie079cd26daa [] [] }} 
ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" Namespace="calico-system" Pod="csi-node-driver-zml8c" WorkloadEndpoint="localhost-k8s-csi--node--driver--zml8c-" Nov 5 04:50:22.151898 containerd[1644]: 2025-11-05 04:50:21.997 [INFO][3929] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" Namespace="calico-system" Pod="csi-node-driver-zml8c" WorkloadEndpoint="localhost-k8s-csi--node--driver--zml8c-eth0" Nov 5 04:50:22.151898 containerd[1644]: 2025-11-05 04:50:22.078 [INFO][3948] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" HandleID="k8s-pod-network.9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" Workload="localhost-k8s-csi--node--driver--zml8c-eth0" Nov 5 04:50:22.152119 containerd[1644]: 2025-11-05 04:50:22.078 [INFO][3948] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" HandleID="k8s-pod-network.9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" Workload="localhost-k8s-csi--node--driver--zml8c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005920e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-zml8c", "timestamp":"2025-11-05 04:50:22.078077542 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:50:22.152119 containerd[1644]: 2025-11-05 04:50:22.078 [INFO][3948] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:50:22.152119 containerd[1644]: 2025-11-05 04:50:22.079 [INFO][3948] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:50:22.152119 containerd[1644]: 2025-11-05 04:50:22.079 [INFO][3948] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:50:22.152119 containerd[1644]: 2025-11-05 04:50:22.091 [INFO][3948] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" host="localhost" Nov 5 04:50:22.152119 containerd[1644]: 2025-11-05 04:50:22.100 [INFO][3948] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:50:22.152119 containerd[1644]: 2025-11-05 04:50:22.107 [INFO][3948] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:50:22.152119 containerd[1644]: 2025-11-05 04:50:22.110 [INFO][3948] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:22.152119 containerd[1644]: 2025-11-05 04:50:22.113 [INFO][3948] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:22.152119 containerd[1644]: 2025-11-05 04:50:22.113 [INFO][3948] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" host="localhost" Nov 5 04:50:22.152326 containerd[1644]: 2025-11-05 04:50:22.115 [INFO][3948] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2 Nov 5 04:50:22.152326 containerd[1644]: 2025-11-05 04:50:22.120 [INFO][3948] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" host="localhost" Nov 5 04:50:22.152326 containerd[1644]: 2025-11-05 04:50:22.125 [INFO][3948] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" host="localhost" Nov 5 04:50:22.152326 containerd[1644]: 2025-11-05 04:50:22.125 [INFO][3948] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" host="localhost" Nov 5 04:50:22.152326 containerd[1644]: 2025-11-05 04:50:22.125 [INFO][3948] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:50:22.152326 containerd[1644]: 2025-11-05 04:50:22.125 [INFO][3948] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" HandleID="k8s-pod-network.9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" Workload="localhost-k8s-csi--node--driver--zml8c-eth0" Nov 5 04:50:22.152456 containerd[1644]: 2025-11-05 04:50:22.128 [INFO][3929] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" Namespace="calico-system" Pod="csi-node-driver-zml8c" WorkloadEndpoint="localhost-k8s-csi--node--driver--zml8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zml8c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f4956568-400e-4c71-8a7d-11217f3b2032", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-zml8c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie079cd26daa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:22.152527 containerd[1644]: 2025-11-05 04:50:22.129 [INFO][3929] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" Namespace="calico-system" Pod="csi-node-driver-zml8c" WorkloadEndpoint="localhost-k8s-csi--node--driver--zml8c-eth0" Nov 5 04:50:22.152527 containerd[1644]: 2025-11-05 04:50:22.129 [INFO][3929] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie079cd26daa ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" Namespace="calico-system" Pod="csi-node-driver-zml8c" WorkloadEndpoint="localhost-k8s-csi--node--driver--zml8c-eth0" Nov 5 04:50:22.152527 containerd[1644]: 2025-11-05 04:50:22.136 [INFO][3929] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" Namespace="calico-system" Pod="csi-node-driver-zml8c" WorkloadEndpoint="localhost-k8s-csi--node--driver--zml8c-eth0" Nov 5 04:50:22.152591 containerd[1644]: 2025-11-05 04:50:22.137 [INFO][3929] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" 
Namespace="calico-system" Pod="csi-node-driver-zml8c" WorkloadEndpoint="localhost-k8s-csi--node--driver--zml8c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--zml8c-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f4956568-400e-4c71-8a7d-11217f3b2032", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2", Pod:"csi-node-driver-zml8c", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie079cd26daa", MAC:"ca:d0:00:fe:9f:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:22.152643 containerd[1644]: 2025-11-05 04:50:22.148 [INFO][3929] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" Namespace="calico-system" Pod="csi-node-driver-zml8c" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--zml8c-eth0" Nov 5 04:50:22.238387 kubelet[2814]: I1105 04:50:22.238249 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3d93d76-fadd-41a3-bad6-e61b29990155-whisker-ca-bundle\") pod \"whisker-57fcc4ffd7-8h8nc\" (UID: \"b3d93d76-fadd-41a3-bad6-e61b29990155\") " pod="calico-system/whisker-57fcc4ffd7-8h8nc" Nov 5 04:50:22.238387 kubelet[2814]: I1105 04:50:22.238298 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b3d93d76-fadd-41a3-bad6-e61b29990155-whisker-backend-key-pair\") pod \"whisker-57fcc4ffd7-8h8nc\" (UID: \"b3d93d76-fadd-41a3-bad6-e61b29990155\") " pod="calico-system/whisker-57fcc4ffd7-8h8nc" Nov 5 04:50:22.238387 kubelet[2814]: I1105 04:50:22.238316 2814 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7t48\" (UniqueName: \"kubernetes.io/projected/b3d93d76-fadd-41a3-bad6-e61b29990155-kube-api-access-c7t48\") pod \"whisker-57fcc4ffd7-8h8nc\" (UID: \"b3d93d76-fadd-41a3-bad6-e61b29990155\") " pod="calico-system/whisker-57fcc4ffd7-8h8nc" Nov 5 04:50:22.262933 containerd[1644]: time="2025-11-05T04:50:22.262871668Z" level=info msg="connecting to shim 9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2" address="unix:///run/containerd/s/c1c028316dcfc92ec50c8d335e18d558673e10a94ea618af81b73e698fa74686" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:50:22.284892 systemd[1]: Started cri-containerd-9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2.scope - libcontainer container 9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2. 
Nov 5 04:50:22.297982 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:50:22.313554 containerd[1644]: time="2025-11-05T04:50:22.313493880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zml8c,Uid:f4956568-400e-4c71-8a7d-11217f3b2032,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bd55b80896d3c06d25768816b7aa1104f0a2b812b7f2ce84dd676401d5ce9f2\"" Nov 5 04:50:22.315023 containerd[1644]: time="2025-11-05T04:50:22.314998436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 04:50:22.367097 systemd[1]: var-lib-kubelet-pods-6c9d1710\x2dea97\x2d4e77\x2db4cf\x2d83ba2c2c8004-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4gqc9.mount: Deactivated successfully. Nov 5 04:50:22.367212 systemd[1]: var-lib-kubelet-pods-6c9d1710\x2dea97\x2d4e77\x2db4cf\x2d83ba2c2c8004-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 5 04:50:22.419577 containerd[1644]: time="2025-11-05T04:50:22.419508532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fcc4ffd7-8h8nc,Uid:b3d93d76-fadd-41a3-bad6-e61b29990155,Namespace:calico-system,Attempt:0,}" Nov 5 04:50:22.534821 systemd-networkd[1534]: cali9588cda3018: Link UP Nov 5 04:50:22.535043 systemd-networkd[1534]: cali9588cda3018: Gained carrier Nov 5 04:50:22.549252 containerd[1644]: 2025-11-05 04:50:22.459 [INFO][4016] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 04:50:22.549252 containerd[1644]: 2025-11-05 04:50:22.473 [INFO][4016] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0 whisker-57fcc4ffd7- calico-system b3d93d76-fadd-41a3-bad6-e61b29990155 960 0 2025-11-05 04:50:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:57fcc4ffd7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-57fcc4ffd7-8h8nc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9588cda3018 [] [] }} ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Namespace="calico-system" Pod="whisker-57fcc4ffd7-8h8nc" WorkloadEndpoint="localhost-k8s-whisker--57fcc4ffd7--8h8nc-" Nov 5 04:50:22.549252 containerd[1644]: 2025-11-05 04:50:22.473 [INFO][4016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Namespace="calico-system" Pod="whisker-57fcc4ffd7-8h8nc" WorkloadEndpoint="localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0" Nov 5 04:50:22.549252 containerd[1644]: 2025-11-05 04:50:22.499 [INFO][4031] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" 
HandleID="k8s-pod-network.5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Workload="localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0" Nov 5 04:50:22.549544 containerd[1644]: 2025-11-05 04:50:22.499 [INFO][4031] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" HandleID="k8s-pod-network.5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Workload="localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-57fcc4ffd7-8h8nc", "timestamp":"2025-11-05 04:50:22.499139013 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:50:22.549544 containerd[1644]: 2025-11-05 04:50:22.499 [INFO][4031] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:50:22.549544 containerd[1644]: 2025-11-05 04:50:22.499 [INFO][4031] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:50:22.549544 containerd[1644]: 2025-11-05 04:50:22.499 [INFO][4031] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:50:22.549544 containerd[1644]: 2025-11-05 04:50:22.506 [INFO][4031] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" host="localhost" Nov 5 04:50:22.549544 containerd[1644]: 2025-11-05 04:50:22.511 [INFO][4031] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:50:22.549544 containerd[1644]: 2025-11-05 04:50:22.515 [INFO][4031] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:50:22.549544 containerd[1644]: 2025-11-05 04:50:22.517 [INFO][4031] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:22.549544 containerd[1644]: 2025-11-05 04:50:22.519 [INFO][4031] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:22.549544 containerd[1644]: 2025-11-05 04:50:22.519 [INFO][4031] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" host="localhost" Nov 5 04:50:22.549825 containerd[1644]: 2025-11-05 04:50:22.520 [INFO][4031] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3 Nov 5 04:50:22.549825 containerd[1644]: 2025-11-05 04:50:22.523 [INFO][4031] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" host="localhost" Nov 5 04:50:22.549825 containerd[1644]: 2025-11-05 04:50:22.529 [INFO][4031] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" host="localhost" Nov 5 04:50:22.549825 containerd[1644]: 2025-11-05 04:50:22.529 [INFO][4031] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" host="localhost" Nov 5 04:50:22.549825 containerd[1644]: 2025-11-05 04:50:22.529 [INFO][4031] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:50:22.549825 containerd[1644]: 2025-11-05 04:50:22.529 [INFO][4031] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" HandleID="k8s-pod-network.5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Workload="localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0" Nov 5 04:50:22.549960 containerd[1644]: 2025-11-05 04:50:22.532 [INFO][4016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Namespace="calico-system" Pod="whisker-57fcc4ffd7-8h8nc" WorkloadEndpoint="localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0", GenerateName:"whisker-57fcc4ffd7-", Namespace:"calico-system", SelfLink:"", UID:"b3d93d76-fadd-41a3-bad6-e61b29990155", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57fcc4ffd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-57fcc4ffd7-8h8nc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9588cda3018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:22.549960 containerd[1644]: 2025-11-05 04:50:22.532 [INFO][4016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Namespace="calico-system" Pod="whisker-57fcc4ffd7-8h8nc" WorkloadEndpoint="localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0" Nov 5 04:50:22.550036 containerd[1644]: 2025-11-05 04:50:22.532 [INFO][4016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9588cda3018 ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Namespace="calico-system" Pod="whisker-57fcc4ffd7-8h8nc" WorkloadEndpoint="localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0" Nov 5 04:50:22.550036 containerd[1644]: 2025-11-05 04:50:22.535 [INFO][4016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Namespace="calico-system" Pod="whisker-57fcc4ffd7-8h8nc" WorkloadEndpoint="localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0" Nov 5 04:50:22.550090 containerd[1644]: 2025-11-05 04:50:22.536 [INFO][4016] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Namespace="calico-system" Pod="whisker-57fcc4ffd7-8h8nc" 
WorkloadEndpoint="localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0", GenerateName:"whisker-57fcc4ffd7-", Namespace:"calico-system", SelfLink:"", UID:"b3d93d76-fadd-41a3-bad6-e61b29990155", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 50, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"57fcc4ffd7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3", Pod:"whisker-57fcc4ffd7-8h8nc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9588cda3018", MAC:"1e:23:96:86:e4:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:22.550139 containerd[1644]: 2025-11-05 04:50:22.545 [INFO][4016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" Namespace="calico-system" Pod="whisker-57fcc4ffd7-8h8nc" WorkloadEndpoint="localhost-k8s-whisker--57fcc4ffd7--8h8nc-eth0" Nov 5 04:50:22.647250 containerd[1644]: time="2025-11-05T04:50:22.647184946Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Nov 5 04:50:22.834149 containerd[1644]: time="2025-11-05T04:50:22.833899607Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 04:50:22.834149 containerd[1644]: time="2025-11-05T04:50:22.834001208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:22.834297 kubelet[2814]: E1105 04:50:22.834169 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:50:22.834297 kubelet[2814]: E1105 04:50:22.834243 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:50:22.834703 kubelet[2814]: E1105 04:50:22.834344 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zml8c_calico-system(f4956568-400e-4c71-8a7d-11217f3b2032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:22.835444 containerd[1644]: time="2025-11-05T04:50:22.835333349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 04:50:22.854442 containerd[1644]: time="2025-11-05T04:50:22.854392777Z" level=info msg="connecting to shim 
5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3" address="unix:///run/containerd/s/73d507f8fa9cf78b10f376d357d80c193441f9a28e81dcfabcbff83921e43e0e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:50:22.862140 kubelet[2814]: I1105 04:50:22.861800 2814 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c9d1710-ea97-4e77-b4cf-83ba2c2c8004" path="/var/lib/kubelet/pods/6c9d1710-ea97-4e77-b4cf-83ba2c2c8004/volumes" Nov 5 04:50:22.881892 systemd[1]: Started cri-containerd-5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3.scope - libcontainer container 5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3. Nov 5 04:50:22.894902 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:50:22.924869 containerd[1644]: time="2025-11-05T04:50:22.924830356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fcc4ffd7-8h8nc,Uid:b3d93d76-fadd-41a3-bad6-e61b29990155,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c1af9f166ffa0cd146edb0de36f90f21e0665ea9f58d4e489b8aff03eb5edf3\"" Nov 5 04:50:23.040501 kubelet[2814]: E1105 04:50:23.040447 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:23.161986 containerd[1644]: time="2025-11-05T04:50:23.161915884Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:23.166758 containerd[1644]: time="2025-11-05T04:50:23.166616239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 04:50:23.166758 containerd[1644]: time="2025-11-05T04:50:23.166722870Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:23.167155 kubelet[2814]: E1105 04:50:23.167025 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:50:23.167155 kubelet[2814]: E1105 04:50:23.167089 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:50:23.167712 kubelet[2814]: E1105 04:50:23.167280 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zml8c_calico-system(f4956568-400e-4c71-8a7d-11217f3b2032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:23.167712 kubelet[2814]: E1105 04:50:23.167321 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:50:23.167979 containerd[1644]: time="2025-11-05T04:50:23.167855536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 04:50:23.184166 kubelet[2814]: I1105 04:50:23.184113 2814 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 5 04:50:23.185663 kubelet[2814]: E1105 04:50:23.185630 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:23.513826 containerd[1644]: time="2025-11-05T04:50:23.513687109Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:23.514934 containerd[1644]: time="2025-11-05T04:50:23.514879397Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 04:50:23.515097 containerd[1644]: time="2025-11-05T04:50:23.514951732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:23.515234 kubelet[2814]: E1105 04:50:23.515187 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:50:23.515324 kubelet[2814]: E1105 04:50:23.515241 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:50:23.515351 kubelet[2814]: E1105 04:50:23.515325 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-57fcc4ffd7-8h8nc_calico-system(b3d93d76-fadd-41a3-bad6-e61b29990155): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:23.516336 containerd[1644]: time="2025-11-05T04:50:23.516304292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 04:50:23.571797 systemd-networkd[1534]: vxlan.calico: Link UP Nov 5 04:50:23.571817 systemd-networkd[1534]: vxlan.calico: Gained carrier Nov 5 04:50:23.756980 systemd-networkd[1534]: cali9588cda3018: Gained IPv6LL Nov 5 04:50:23.865392 kubelet[2814]: E1105 04:50:23.865043 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:23.866964 containerd[1644]: time="2025-11-05T04:50:23.866582884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dw9s5,Uid:5c3560b7-8a2f-467b-a954-04af37b9cb69,Namespace:kube-system,Attempt:0,}" Nov 5 04:50:23.869656 containerd[1644]: time="2025-11-05T04:50:23.869545385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9fdcbf84-kfpz4,Uid:28221097-2724-4244-b685-bd415dc30351,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:50:23.871096 containerd[1644]: time="2025-11-05T04:50:23.871048798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cqvt5,Uid:42c91c94-d899-482d-a358-2c80842231e1,Namespace:calico-system,Attempt:0,}" Nov 5 04:50:23.895047 containerd[1644]: time="2025-11-05T04:50:23.894955334Z" level=info msg="fetch 
failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:23.898984 containerd[1644]: time="2025-11-05T04:50:23.898906692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 04:50:23.898984 containerd[1644]: time="2025-11-05T04:50:23.898949853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:23.900021 kubelet[2814]: E1105 04:50:23.899715 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:50:23.900113 kubelet[2814]: E1105 04:50:23.900038 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:50:23.901254 kubelet[2814]: E1105 04:50:23.900174 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-57fcc4ffd7-8h8nc_calico-system(b3d93d76-fadd-41a3-bad6-e61b29990155): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:23.901315 kubelet[2814]: E1105 04:50:23.901283 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57fcc4ffd7-8h8nc" podUID="b3d93d76-fadd-41a3-bad6-e61b29990155" Nov 5 04:50:24.033578 systemd-networkd[1534]: calic0434744d32: Link UP Nov 5 04:50:24.034815 systemd-networkd[1534]: calic0434744d32: Gained carrier Nov 5 04:50:24.049323 kubelet[2814]: E1105 04:50:24.049272 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:24.050646 kubelet[2814]: E1105 04:50:24.050604 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:24.051266 kubelet[2814]: E1105 04:50:24.051212 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57fcc4ffd7-8h8nc" podUID="b3d93d76-fadd-41a3-bad6-e61b29990155" Nov 5 04:50:24.055412 kubelet[2814]: E1105 04:50:24.055312 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:50:24.066775 containerd[1644]: 2025-11-05 04:50:23.949 [INFO][4309] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--dw9s5-eth0 coredns-66bc5c9577- kube-system 5c3560b7-8a2f-467b-a954-04af37b9cb69 832 0 2025-11-05 04:49:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-dw9s5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic0434744d32 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Namespace="kube-system" 
Pod="coredns-66bc5c9577-dw9s5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dw9s5-" Nov 5 04:50:24.066775 containerd[1644]: 2025-11-05 04:50:23.950 [INFO][4309] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Namespace="kube-system" Pod="coredns-66bc5c9577-dw9s5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dw9s5-eth0" Nov 5 04:50:24.066775 containerd[1644]: 2025-11-05 04:50:23.986 [INFO][4367] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" HandleID="k8s-pod-network.b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Workload="localhost-k8s-coredns--66bc5c9577--dw9s5-eth0" Nov 5 04:50:24.067042 containerd[1644]: 2025-11-05 04:50:23.987 [INFO][4367] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" HandleID="k8s-pod-network.b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Workload="localhost-k8s-coredns--66bc5c9577--dw9s5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad390), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-dw9s5", "timestamp":"2025-11-05 04:50:23.986939328 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:50:24.067042 containerd[1644]: 2025-11-05 04:50:23.987 [INFO][4367] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:50:24.067042 containerd[1644]: 2025-11-05 04:50:23.987 [INFO][4367] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:50:24.067042 containerd[1644]: 2025-11-05 04:50:23.987 [INFO][4367] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:50:24.067042 containerd[1644]: 2025-11-05 04:50:23.994 [INFO][4367] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" host="localhost" Nov 5 04:50:24.067042 containerd[1644]: 2025-11-05 04:50:24.002 [INFO][4367] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:50:24.067042 containerd[1644]: 2025-11-05 04:50:24.011 [INFO][4367] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:50:24.067042 containerd[1644]: 2025-11-05 04:50:24.013 [INFO][4367] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:24.067042 containerd[1644]: 2025-11-05 04:50:24.015 [INFO][4367] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:24.067042 containerd[1644]: 2025-11-05 04:50:24.015 [INFO][4367] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" host="localhost" Nov 5 04:50:24.067505 containerd[1644]: 2025-11-05 04:50:24.016 [INFO][4367] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015 Nov 5 04:50:24.067505 containerd[1644]: 2025-11-05 04:50:24.020 [INFO][4367] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" host="localhost" Nov 5 04:50:24.067505 containerd[1644]: 2025-11-05 04:50:24.025 [INFO][4367] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" host="localhost" Nov 5 04:50:24.067505 containerd[1644]: 2025-11-05 04:50:24.025 [INFO][4367] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" host="localhost" Nov 5 04:50:24.067505 containerd[1644]: 2025-11-05 04:50:24.025 [INFO][4367] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:50:24.067505 containerd[1644]: 2025-11-05 04:50:24.025 [INFO][4367] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" HandleID="k8s-pod-network.b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Workload="localhost-k8s-coredns--66bc5c9577--dw9s5-eth0" Nov 5 04:50:24.067665 containerd[1644]: 2025-11-05 04:50:24.029 [INFO][4309] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Namespace="kube-system" Pod="coredns-66bc5c9577-dw9s5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dw9s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dw9s5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5c3560b7-8a2f-467b-a954-04af37b9cb69", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-dw9s5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0434744d32", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:24.067665 containerd[1644]: 2025-11-05 04:50:24.030 [INFO][4309] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Namespace="kube-system" Pod="coredns-66bc5c9577-dw9s5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dw9s5-eth0" Nov 5 04:50:24.067665 containerd[1644]: 2025-11-05 04:50:24.030 [INFO][4309] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic0434744d32 ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Namespace="kube-system" Pod="coredns-66bc5c9577-dw9s5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dw9s5-eth0" Nov 5 
04:50:24.067665 containerd[1644]: 2025-11-05 04:50:24.035 [INFO][4309] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Namespace="kube-system" Pod="coredns-66bc5c9577-dw9s5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dw9s5-eth0" Nov 5 04:50:24.067665 containerd[1644]: 2025-11-05 04:50:24.037 [INFO][4309] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Namespace="kube-system" Pod="coredns-66bc5c9577-dw9s5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dw9s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dw9s5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"5c3560b7-8a2f-467b-a954-04af37b9cb69", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015", Pod:"coredns-66bc5c9577-dw9s5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic0434744d32", 
MAC:"26:74:13:e7:64:ea", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:24.067665 containerd[1644]: 2025-11-05 04:50:24.057 [INFO][4309] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" Namespace="kube-system" Pod="coredns-66bc5c9577-dw9s5" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dw9s5-eth0" Nov 5 04:50:24.079626 systemd-networkd[1534]: calie079cd26daa: Gained IPv6LL Nov 5 04:50:24.376573 containerd[1644]: time="2025-11-05T04:50:24.376522033Z" level=info msg="connecting to shim b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015" address="unix:///run/containerd/s/88de3560f569e4634691ae4f2f2e55bfdcff53119eb4c2fca76ce744c6dde8ec" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:50:24.390934 systemd-networkd[1534]: cali4fa31ca23e6: Link UP Nov 5 04:50:24.392766 systemd-networkd[1534]: cali4fa31ca23e6: Gained carrier Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:23.932 [INFO][4327] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0 
calico-apiserver-5c9fdcbf84- calico-apiserver 28221097-2724-4244-b685-bd415dc30351 843 0 2025-11-05 04:49:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c9fdcbf84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c9fdcbf84-kfpz4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4fa31ca23e6 [] [] }} ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-kfpz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:23.932 [INFO][4327] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-kfpz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:23.989 [INFO][4360] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" HandleID="k8s-pod-network.d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Workload="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:23.990 [INFO][4360] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" HandleID="k8s-pod-network.d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Workload="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ea80), Attrs:map[string]string{"namespace":"calico-apiserver", 
"node":"localhost", "pod":"calico-apiserver-5c9fdcbf84-kfpz4", "timestamp":"2025-11-05 04:50:23.989814495 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:23.990 [INFO][4360] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.025 [INFO][4360] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.025 [INFO][4360] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.100 [INFO][4360] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" host="localhost" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.343 [INFO][4360] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.358 [INFO][4360] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.361 [INFO][4360] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.364 [INFO][4360] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.364 [INFO][4360] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" host="localhost" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.366 [INFO][4360] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.370 [INFO][4360] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" host="localhost" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.378 [INFO][4360] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" host="localhost" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.378 [INFO][4360] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" host="localhost" Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.379 [INFO][4360] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 04:50:24.408535 containerd[1644]: 2025-11-05 04:50:24.379 [INFO][4360] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" HandleID="k8s-pod-network.d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Workload="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0" Nov 5 04:50:24.409133 containerd[1644]: 2025-11-05 04:50:24.384 [INFO][4327] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-kfpz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0", GenerateName:"calico-apiserver-5c9fdcbf84-", Namespace:"calico-apiserver", SelfLink:"", UID:"28221097-2724-4244-b685-bd415dc30351", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9fdcbf84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c9fdcbf84-kfpz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fa31ca23e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:24.409133 containerd[1644]: 2025-11-05 04:50:24.384 [INFO][4327] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-kfpz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0" Nov 5 04:50:24.409133 containerd[1644]: 2025-11-05 04:50:24.384 [INFO][4327] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fa31ca23e6 ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-kfpz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0" Nov 5 04:50:24.409133 containerd[1644]: 2025-11-05 04:50:24.395 [INFO][4327] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-kfpz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0" Nov 5 04:50:24.409133 containerd[1644]: 2025-11-05 04:50:24.396 [INFO][4327] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-kfpz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0", 
GenerateName:"calico-apiserver-5c9fdcbf84-", Namespace:"calico-apiserver", SelfLink:"", UID:"28221097-2724-4244-b685-bd415dc30351", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9fdcbf84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f", Pod:"calico-apiserver-5c9fdcbf84-kfpz4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4fa31ca23e6", MAC:"7e:6e:2b:88:38:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:24.409133 containerd[1644]: 2025-11-05 04:50:24.405 [INFO][4327] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-kfpz4" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--kfpz4-eth0" Nov 5 04:50:24.417800 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:60136.service - OpenSSH per-connection server daemon (10.0.0.1:60136). 
Nov 5 04:50:24.428906 systemd[1]: Started cri-containerd-b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015.scope - libcontainer container b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015. Nov 5 04:50:24.451963 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:50:24.452964 containerd[1644]: time="2025-11-05T04:50:24.452818662Z" level=info msg="connecting to shim d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f" address="unix:///run/containerd/s/1a613dfa1a1c24f7bc4ee78eed167288cb19aa2b54ea9c78a967693086b738a6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:50:24.488434 systemd[1]: Started cri-containerd-d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f.scope - libcontainer container d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f. Nov 5 04:50:24.488837 sshd[4468]: Accepted publickey for core from 10.0.0.1 port 60136 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:24.492032 sshd-session[4468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:24.516457 systemd-logind[1625]: New session 11 of user core. Nov 5 04:50:24.522776 systemd-networkd[1534]: cali157ee02580d: Link UP Nov 5 04:50:24.522976 systemd-networkd[1534]: cali157ee02580d: Gained carrier Nov 5 04:50:24.531654 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:50:24.539940 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 5 04:50:24.542387 containerd[1644]: time="2025-11-05T04:50:24.542341732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dw9s5,Uid:5c3560b7-8a2f-467b-a954-04af37b9cb69,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015\"" Nov 5 04:50:24.547389 kubelet[2814]: E1105 04:50:24.547358 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:24.561779 containerd[1644]: time="2025-11-05T04:50:24.561555375Z" level=info msg="CreateContainer within sandbox \"b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:23.959 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--cqvt5-eth0 goldmane-7c778bb748- calico-system 42c91c94-d899-482d-a358-2c80842231e1 841 0 2025-11-05 04:49:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-cqvt5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali157ee02580d [] [] }} ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Namespace="calico-system" Pod="goldmane-7c778bb748-cqvt5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cqvt5-" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:23.959 [INFO][4325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Namespace="calico-system" Pod="goldmane-7c778bb748-cqvt5" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cqvt5-eth0" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.007 [INFO][4375] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" HandleID="k8s-pod-network.3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Workload="localhost-k8s-goldmane--7c778bb748--cqvt5-eth0" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.007 [INFO][4375] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" HandleID="k8s-pod-network.3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Workload="localhost-k8s-goldmane--7c778bb748--cqvt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005963a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-cqvt5", "timestamp":"2025-11-05 04:50:24.007049765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.007 [INFO][4375] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.379 [INFO][4375] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.379 [INFO][4375] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.386 [INFO][4375] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" host="localhost" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.445 [INFO][4375] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.465 [INFO][4375] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.469 [INFO][4375] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.477 [INFO][4375] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.477 [INFO][4375] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" host="localhost" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.480 [INFO][4375] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.494 [INFO][4375] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" host="localhost" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.508 [INFO][4375] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" host="localhost" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.508 [INFO][4375] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" host="localhost" Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.508 [INFO][4375] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:50:24.563249 containerd[1644]: 2025-11-05 04:50:24.508 [INFO][4375] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" HandleID="k8s-pod-network.3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Workload="localhost-k8s-goldmane--7c778bb748--cqvt5-eth0" Nov 5 04:50:24.563926 containerd[1644]: 2025-11-05 04:50:24.514 [INFO][4325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Namespace="calico-system" Pod="goldmane-7c778bb748-cqvt5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cqvt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--cqvt5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"42c91c94-d899-482d-a358-2c80842231e1", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-cqvt5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali157ee02580d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:24.563926 containerd[1644]: 2025-11-05 04:50:24.514 [INFO][4325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Namespace="calico-system" Pod="goldmane-7c778bb748-cqvt5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cqvt5-eth0" Nov 5 04:50:24.563926 containerd[1644]: 2025-11-05 04:50:24.514 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali157ee02580d ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Namespace="calico-system" Pod="goldmane-7c778bb748-cqvt5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cqvt5-eth0" Nov 5 04:50:24.563926 containerd[1644]: 2025-11-05 04:50:24.517 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Namespace="calico-system" Pod="goldmane-7c778bb748-cqvt5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cqvt5-eth0" Nov 5 04:50:24.563926 containerd[1644]: 2025-11-05 04:50:24.518 [INFO][4325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Namespace="calico-system" Pod="goldmane-7c778bb748-cqvt5" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cqvt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--cqvt5-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"42c91c94-d899-482d-a358-2c80842231e1", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a", Pod:"goldmane-7c778bb748-cqvt5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali157ee02580d", MAC:"76:b0:39:08:ab:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:24.563926 containerd[1644]: 2025-11-05 04:50:24.534 [INFO][4325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" Namespace="calico-system" Pod="goldmane-7c778bb748-cqvt5" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--cqvt5-eth0" Nov 5 04:50:24.589608 systemd-networkd[1534]: vxlan.calico: Gained IPv6LL Nov 5 04:50:24.595787 containerd[1644]: 
time="2025-11-05T04:50:24.594048066Z" level=info msg="Container 80bb8ae9bb8cd924f07742b490050b909d9fb0348192d390c60d4a12b3b35733: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:50:24.604124 containerd[1644]: time="2025-11-05T04:50:24.604065012Z" level=info msg="connecting to shim 3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a" address="unix:///run/containerd/s/1f3f2044cf047863078b5451c3759f7c95c6c5b5b0fe5518e48ce62e9e56d19f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:50:24.604497 containerd[1644]: time="2025-11-05T04:50:24.604461166Z" level=info msg="CreateContainer within sandbox \"b1705ce5f4ab610522ebff37ae8b47aeb9392099e489e233d1bf62dd33d52015\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"80bb8ae9bb8cd924f07742b490050b909d9fb0348192d390c60d4a12b3b35733\"" Nov 5 04:50:24.605770 containerd[1644]: time="2025-11-05T04:50:24.605747240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9fdcbf84-kfpz4,Uid:28221097-2724-4244-b685-bd415dc30351,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d2250db1adef4d36bac3e19605f3f0b6479065d72810679cd200fe5e5241119f\"" Nov 5 04:50:24.607498 containerd[1644]: time="2025-11-05T04:50:24.607461058Z" level=info msg="StartContainer for \"80bb8ae9bb8cd924f07742b490050b909d9fb0348192d390c60d4a12b3b35733\"" Nov 5 04:50:24.609051 containerd[1644]: time="2025-11-05T04:50:24.609030354Z" level=info msg="connecting to shim 80bb8ae9bb8cd924f07742b490050b909d9fb0348192d390c60d4a12b3b35733" address="unix:///run/containerd/s/88de3560f569e4634691ae4f2f2e55bfdcff53119eb4c2fca76ce744c6dde8ec" protocol=ttrpc version=3 Nov 5 04:50:24.611976 containerd[1644]: time="2025-11-05T04:50:24.611941349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:50:24.630899 systemd[1]: Started cri-containerd-3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a.scope - libcontainer container 
3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a. Nov 5 04:50:24.634595 systemd[1]: Started cri-containerd-80bb8ae9bb8cd924f07742b490050b909d9fb0348192d390c60d4a12b3b35733.scope - libcontainer container 80bb8ae9bb8cd924f07742b490050b909d9fb0348192d390c60d4a12b3b35733. Nov 5 04:50:24.647247 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:50:24.684912 sshd[4527]: Connection closed by 10.0.0.1 port 60136 Nov 5 04:50:24.685934 sshd-session[4468]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:24.689472 containerd[1644]: time="2025-11-05T04:50:24.689417993Z" level=info msg="StartContainer for \"80bb8ae9bb8cd924f07742b490050b909d9fb0348192d390c60d4a12b3b35733\" returns successfully" Nov 5 04:50:24.692938 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:60136.service: Deactivated successfully. Nov 5 04:50:24.697848 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 04:50:24.699181 systemd-logind[1625]: Session 11 logged out. Waiting for processes to exit. Nov 5 04:50:24.700996 systemd-logind[1625]: Removed session 11. 
Nov 5 04:50:24.701935 containerd[1644]: time="2025-11-05T04:50:24.701887344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-cqvt5,Uid:42c91c94-d899-482d-a358-2c80842231e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"3495008e4ce58f7c29e78af18895b8590056c06a4e0fe2f4b4117291132ee89a\"" Nov 5 04:50:24.860723 kubelet[2814]: E1105 04:50:24.860570 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:24.861001 containerd[1644]: time="2025-11-05T04:50:24.860956631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-swdbh,Uid:745e342d-8061-4ed4-94ef-e773f5826da7,Namespace:kube-system,Attempt:0,}" Nov 5 04:50:24.931595 containerd[1644]: time="2025-11-05T04:50:24.931465574Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:24.933140 containerd[1644]: time="2025-11-05T04:50:24.933096086Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:50:24.933362 containerd[1644]: time="2025-11-05T04:50:24.933144356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:24.933690 kubelet[2814]: E1105 04:50:24.933632 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:50:24.934270 kubelet[2814]: E1105 04:50:24.933795 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:50:24.934270 kubelet[2814]: E1105 04:50:24.933990 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5c9fdcbf84-kfpz4_calico-apiserver(28221097-2724-4244-b685-bd415dc30351): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:24.934270 kubelet[2814]: E1105 04:50:24.934049 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" podUID="28221097-2724-4244-b685-bd415dc30351" Nov 5 04:50:24.934687 containerd[1644]: time="2025-11-05T04:50:24.934655483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 04:50:24.967348 systemd-networkd[1534]: calicd40a73dbb3: Link UP Nov 5 04:50:24.969318 systemd-networkd[1534]: calicd40a73dbb3: Gained carrier Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.902 [INFO][4637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--swdbh-eth0 coredns-66bc5c9577- kube-system 745e342d-8061-4ed4-94ef-e773f5826da7 838 0 2025-11-05 04:49:46 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] 
[]} {k8s localhost coredns-66bc5c9577-swdbh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicd40a73dbb3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Namespace="kube-system" Pod="coredns-66bc5c9577-swdbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--swdbh-" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.902 [INFO][4637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Namespace="kube-system" Pod="coredns-66bc5c9577-swdbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--swdbh-eth0" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.928 [INFO][4651] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" HandleID="k8s-pod-network.087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Workload="localhost-k8s-coredns--66bc5c9577--swdbh-eth0" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.928 [INFO][4651] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" HandleID="k8s-pod-network.087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Workload="localhost-k8s-coredns--66bc5c9577--swdbh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-swdbh", "timestamp":"2025-11-05 04:50:24.92875204 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:50:24.986053 containerd[1644]: 
2025-11-05 04:50:24.929 [INFO][4651] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.929 [INFO][4651] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.929 [INFO][4651] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.936 [INFO][4651] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" host="localhost" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.941 [INFO][4651] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.946 [INFO][4651] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.947 [INFO][4651] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.950 [INFO][4651] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.950 [INFO][4651] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" host="localhost" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.952 [INFO][4651] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3 Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.956 [INFO][4651] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" host="localhost" 
Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.961 [INFO][4651] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" host="localhost" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.961 [INFO][4651] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" host="localhost" Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.961 [INFO][4651] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:50:24.986053 containerd[1644]: 2025-11-05 04:50:24.962 [INFO][4651] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" HandleID="k8s-pod-network.087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Workload="localhost-k8s-coredns--66bc5c9577--swdbh-eth0" Nov 5 04:50:24.986645 containerd[1644]: 2025-11-05 04:50:24.965 [INFO][4637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Namespace="kube-system" Pod="coredns-66bc5c9577-swdbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--swdbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--swdbh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"745e342d-8061-4ed4-94ef-e773f5826da7", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-swdbh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd40a73dbb3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:24.986645 containerd[1644]: 2025-11-05 04:50:24.965 [INFO][4637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Namespace="kube-system" Pod="coredns-66bc5c9577-swdbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--swdbh-eth0" Nov 5 04:50:24.986645 containerd[1644]: 2025-11-05 04:50:24.965 [INFO][4637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd40a73dbb3 
ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Namespace="kube-system" Pod="coredns-66bc5c9577-swdbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--swdbh-eth0" Nov 5 04:50:24.986645 containerd[1644]: 2025-11-05 04:50:24.969 [INFO][4637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Namespace="kube-system" Pod="coredns-66bc5c9577-swdbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--swdbh-eth0" Nov 5 04:50:24.986645 containerd[1644]: 2025-11-05 04:50:24.970 [INFO][4637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Namespace="kube-system" Pod="coredns-66bc5c9577-swdbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--swdbh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--swdbh-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"745e342d-8061-4ed4-94ef-e773f5826da7", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3", Pod:"coredns-66bc5c9577-swdbh", Endpoint:"eth0", ServiceAccountName:"coredns", 
IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicd40a73dbb3", MAC:"c6:b4:07:7e:cc:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:24.986645 containerd[1644]: 2025-11-05 04:50:24.981 [INFO][4637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" Namespace="kube-system" Pod="coredns-66bc5c9577-swdbh" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--swdbh-eth0" Nov 5 04:50:25.009958 containerd[1644]: time="2025-11-05T04:50:25.009907931Z" level=info msg="connecting to shim 087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3" address="unix:///run/containerd/s/870d95fbd865a4ee451fb4b98582ef6ea262cf49b31c96f8222238b65ab7bb49" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:50:25.046139 systemd[1]: Started cri-containerd-087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3.scope - libcontainer container 087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3. 
Nov 5 04:50:25.064043 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:50:25.084219 kubelet[2814]: E1105 04:50:25.084175 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:25.093543 kubelet[2814]: E1105 04:50:25.093398 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" podUID="28221097-2724-4244-b685-bd415dc30351" Nov 5 04:50:25.102523 kubelet[2814]: I1105 04:50:25.102442 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dw9s5" podStartSLOduration=39.102405484 podStartE2EDuration="39.102405484s" podCreationTimestamp="2025-11-05 04:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:50:25.101892241 +0000 UTC m=+46.342201004" watchObservedRunningTime="2025-11-05 04:50:25.102405484 +0000 UTC m=+46.342714248" Nov 5 04:50:25.114314 containerd[1644]: time="2025-11-05T04:50:25.114253786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-swdbh,Uid:745e342d-8061-4ed4-94ef-e773f5826da7,Namespace:kube-system,Attempt:0,} returns sandbox id \"087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3\"" Nov 5 04:50:25.115976 kubelet[2814]: E1105 04:50:25.115862 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:25.120597 containerd[1644]: time="2025-11-05T04:50:25.120543053Z" level=info msg="CreateContainer within sandbox \"087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 04:50:25.137921 containerd[1644]: time="2025-11-05T04:50:25.137782837Z" level=info msg="Container b0578b102072fe701f54bfd88c2685d74a93c694c8813347274a2eb187d10ad6: CDI devices from CRI Config.CDIDevices: []" Nov 5 04:50:25.153454 containerd[1644]: time="2025-11-05T04:50:25.153312570Z" level=info msg="CreateContainer within sandbox \"087a407f90a8aad068d196bb859df47766850b5fb6b64f898bef9b4a3541ccf3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b0578b102072fe701f54bfd88c2685d74a93c694c8813347274a2eb187d10ad6\"" Nov 5 04:50:25.154889 containerd[1644]: time="2025-11-05T04:50:25.153977489Z" level=info msg="StartContainer for \"b0578b102072fe701f54bfd88c2685d74a93c694c8813347274a2eb187d10ad6\"" Nov 5 04:50:25.155578 containerd[1644]: time="2025-11-05T04:50:25.155535383Z" level=info msg="connecting to shim b0578b102072fe701f54bfd88c2685d74a93c694c8813347274a2eb187d10ad6" address="unix:///run/containerd/s/870d95fbd865a4ee451fb4b98582ef6ea262cf49b31c96f8222238b65ab7bb49" protocol=ttrpc version=3 Nov 5 04:50:25.182033 systemd[1]: Started cri-containerd-b0578b102072fe701f54bfd88c2685d74a93c694c8813347274a2eb187d10ad6.scope - libcontainer container b0578b102072fe701f54bfd88c2685d74a93c694c8813347274a2eb187d10ad6. 
Nov 5 04:50:25.225316 containerd[1644]: time="2025-11-05T04:50:25.225262840Z" level=info msg="StartContainer for \"b0578b102072fe701f54bfd88c2685d74a93c694c8813347274a2eb187d10ad6\" returns successfully" Nov 5 04:50:25.228966 systemd-networkd[1534]: calic0434744d32: Gained IPv6LL Nov 5 04:50:25.263931 containerd[1644]: time="2025-11-05T04:50:25.263875708Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:25.265296 containerd[1644]: time="2025-11-05T04:50:25.265265617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 04:50:25.268675 kubelet[2814]: E1105 04:50:25.268620 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:50:25.268675 kubelet[2814]: E1105 04:50:25.268669 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:50:25.268798 kubelet[2814]: E1105 04:50:25.268760 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-cqvt5_calico-system(42c91c94-d899-482d-a358-2c80842231e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:25.268823 kubelet[2814]: E1105 
04:50:25.268793 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cqvt5" podUID="42c91c94-d899-482d-a358-2c80842231e1" Nov 5 04:50:25.273710 containerd[1644]: time="2025-11-05T04:50:25.265345326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:25.861236 containerd[1644]: time="2025-11-05T04:50:25.861187556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9fdcbf84-drbnr,Uid:6f45e524-4d73-4922-a936-2a28ed045d2c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 04:50:25.969942 systemd-networkd[1534]: cali14b5d5a6fcf: Link UP Nov 5 04:50:25.970840 systemd-networkd[1534]: cali14b5d5a6fcf: Gained carrier Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.894 [INFO][4753] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0 calico-apiserver-5c9fdcbf84- calico-apiserver 6f45e524-4d73-4922-a936-2a28ed045d2c 844 0 2025-11-05 04:49:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5c9fdcbf84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5c9fdcbf84-drbnr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali14b5d5a6fcf [] [] }} ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-drbnr" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.894 [INFO][4753] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-drbnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.918 [INFO][4768] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" HandleID="k8s-pod-network.0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Workload="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.918 [INFO][4768] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" HandleID="k8s-pod-network.0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Workload="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138eb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5c9fdcbf84-drbnr", "timestamp":"2025-11-05 04:50:25.918339846 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.918 [INFO][4768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.918 [INFO][4768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.918 [INFO][4768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.925 [INFO][4768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" host="localhost" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.930 [INFO][4768] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.934 [INFO][4768] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.935 [INFO][4768] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.937 [INFO][4768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.937 [INFO][4768] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" host="localhost" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.939 [INFO][4768] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4 Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.957 [INFO][4768] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" host="localhost" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.964 [INFO][4768] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" host="localhost" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.964 [INFO][4768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" host="localhost" Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.964 [INFO][4768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 04:50:25.985152 containerd[1644]: 2025-11-05 04:50:25.964 [INFO][4768] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" HandleID="k8s-pod-network.0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Workload="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0" Nov 5 04:50:25.985782 containerd[1644]: 2025-11-05 04:50:25.967 [INFO][4753] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-drbnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0", GenerateName:"calico-apiserver-5c9fdcbf84-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f45e524-4d73-4922-a936-2a28ed045d2c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9fdcbf84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5c9fdcbf84-drbnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14b5d5a6fcf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:25.985782 containerd[1644]: 2025-11-05 04:50:25.967 [INFO][4753] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-drbnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0" Nov 5 04:50:25.985782 containerd[1644]: 2025-11-05 04:50:25.968 [INFO][4753] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14b5d5a6fcf ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-drbnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0" Nov 5 04:50:25.985782 containerd[1644]: 2025-11-05 04:50:25.970 [INFO][4753] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-drbnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0" Nov 5 04:50:25.985782 containerd[1644]: 2025-11-05 04:50:25.971 [INFO][4753] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-drbnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0", GenerateName:"calico-apiserver-5c9fdcbf84-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f45e524-4d73-4922-a936-2a28ed045d2c", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5c9fdcbf84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4", Pod:"calico-apiserver-5c9fdcbf84-drbnr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14b5d5a6fcf", MAC:"46:dc:cf:60:0a:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:25.985782 containerd[1644]: 2025-11-05 04:50:25.981 [INFO][4753] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" Namespace="calico-apiserver" Pod="calico-apiserver-5c9fdcbf84-drbnr" WorkloadEndpoint="localhost-k8s-calico--apiserver--5c9fdcbf84--drbnr-eth0" Nov 5 04:50:26.007968 containerd[1644]: time="2025-11-05T04:50:26.007916516Z" level=info msg="connecting to shim 0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4" address="unix:///run/containerd/s/89b8dd171d1757cc92011ec5481a64f182228a420afbc7714ccbd150c876fb3e" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:50:26.037035 systemd[1]: Started cri-containerd-0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4.scope - libcontainer container 0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4. Nov 5 04:50:26.051262 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:50:26.082939 containerd[1644]: time="2025-11-05T04:50:26.082791888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5c9fdcbf84-drbnr,Uid:6f45e524-4d73-4922-a936-2a28ed045d2c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0e8517c9bd5309c47e59f5abac0757f1e944bbeb84eb3d1533750f11af617ce4\"" Nov 5 04:50:26.087958 containerd[1644]: time="2025-11-05T04:50:26.087921026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:50:26.095333 kubelet[2814]: E1105 04:50:26.095279 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:26.096158 kubelet[2814]: E1105 04:50:26.095479 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:26.096939 kubelet[2814]: E1105 04:50:26.096878 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cqvt5" podUID="42c91c94-d899-482d-a358-2c80842231e1" Nov 5 04:50:26.098084 kubelet[2814]: E1105 04:50:26.098048 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" podUID="28221097-2724-4244-b685-bd415dc30351" Nov 5 04:50:26.115483 kubelet[2814]: I1105 04:50:26.115285 2814 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-swdbh" podStartSLOduration=40.115265446 podStartE2EDuration="40.115265446s" podCreationTimestamp="2025-11-05 04:49:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 04:50:26.115204692 +0000 UTC m=+47.355513455" watchObservedRunningTime="2025-11-05 04:50:26.115265446 +0000 UTC m=+47.355574209" Nov 5 04:50:26.316986 systemd-networkd[1534]: cali4fa31ca23e6: Gained IPv6LL Nov 5 04:50:26.380957 systemd-networkd[1534]: calicd40a73dbb3: Gained IPv6LL Nov 5 04:50:26.427769 containerd[1644]: time="2025-11-05T04:50:26.427678853Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:26.428846 containerd[1644]: time="2025-11-05T04:50:26.428802312Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:50:26.428943 containerd[1644]: time="2025-11-05T04:50:26.428898232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:26.429065 kubelet[2814]: E1105 04:50:26.429023 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:50:26.429127 kubelet[2814]: E1105 04:50:26.429078 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:50:26.429224 kubelet[2814]: E1105 04:50:26.429200 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5c9fdcbf84-drbnr_calico-apiserver(6f45e524-4d73-4922-a936-2a28ed045d2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:26.429280 kubelet[2814]: E1105 04:50:26.429245 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-drbnr" podUID="6f45e524-4d73-4922-a936-2a28ed045d2c" Nov 5 04:50:26.572957 systemd-networkd[1534]: cali157ee02580d: Gained IPv6LL Nov 5 04:50:26.996412 containerd[1644]: time="2025-11-05T04:50:26.996353784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75b5b9c475-hhsfh,Uid:c1cae229-f08d-4b80-be11-a077ed5ab750,Namespace:calico-system,Attempt:0,}" Nov 5 04:50:27.097644 kubelet[2814]: E1105 04:50:27.097351 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:27.097644 kubelet[2814]: E1105 04:50:27.097457 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:27.097644 kubelet[2814]: E1105 04:50:27.097575 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-drbnr" podUID="6f45e524-4d73-4922-a936-2a28ed045d2c" Nov 5 04:50:27.555725 systemd-networkd[1534]: cali55153b1f323: Link UP Nov 5 04:50:27.557110 systemd-networkd[1534]: cali55153b1f323: Gained carrier Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.472 [INFO][4835] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0 calico-kube-controllers-75b5b9c475- calico-system 
c1cae229-f08d-4b80-be11-a077ed5ab750 837 0 2025-11-05 04:49:59 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75b5b9c475 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-75b5b9c475-hhsfh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali55153b1f323 [] [] }} ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Namespace="calico-system" Pod="calico-kube-controllers-75b5b9c475-hhsfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.473 [INFO][4835] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Namespace="calico-system" Pod="calico-kube-controllers-75b5b9c475-hhsfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.506 [INFO][4852] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" HandleID="k8s-pod-network.d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Workload="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.506 [INFO][4852] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" HandleID="k8s-pod-network.d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Workload="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c70c0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75b5b9c475-hhsfh", "timestamp":"2025-11-05 04:50:27.506426202 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.506 [INFO][4852] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.506 [INFO][4852] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.506 [INFO][4852] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.514 [INFO][4852] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" host="localhost" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.520 [INFO][4852] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.528 [INFO][4852] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.530 [INFO][4852] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.532 [INFO][4852] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.532 [INFO][4852] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" host="localhost" Nov 5 
04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.534 [INFO][4852] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.538 [INFO][4852] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" host="localhost" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.546 [INFO][4852] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" host="localhost" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.546 [INFO][4852] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" host="localhost" Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.546 [INFO][4852] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 04:50:27.573506 containerd[1644]: 2025-11-05 04:50:27.546 [INFO][4852] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" HandleID="k8s-pod-network.d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Workload="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0" Nov 5 04:50:27.574286 containerd[1644]: 2025-11-05 04:50:27.549 [INFO][4835] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Namespace="calico-system" Pod="calico-kube-controllers-75b5b9c475-hhsfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0", GenerateName:"calico-kube-controllers-75b5b9c475-", Namespace:"calico-system", SelfLink:"", UID:"c1cae229-f08d-4b80-be11-a077ed5ab750", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75b5b9c475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75b5b9c475-hhsfh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55153b1f323", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:27.574286 containerd[1644]: 2025-11-05 04:50:27.549 [INFO][4835] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Namespace="calico-system" Pod="calico-kube-controllers-75b5b9c475-hhsfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0" Nov 5 04:50:27.574286 containerd[1644]: 2025-11-05 04:50:27.549 [INFO][4835] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali55153b1f323 ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Namespace="calico-system" Pod="calico-kube-controllers-75b5b9c475-hhsfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0" Nov 5 04:50:27.574286 containerd[1644]: 2025-11-05 04:50:27.556 [INFO][4835] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Namespace="calico-system" Pod="calico-kube-controllers-75b5b9c475-hhsfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0" Nov 5 04:50:27.574286 containerd[1644]: 2025-11-05 04:50:27.557 [INFO][4835] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Namespace="calico-system" Pod="calico-kube-controllers-75b5b9c475-hhsfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0", GenerateName:"calico-kube-controllers-75b5b9c475-", Namespace:"calico-system", SelfLink:"", UID:"c1cae229-f08d-4b80-be11-a077ed5ab750", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 4, 49, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75b5b9c475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a", Pod:"calico-kube-controllers-75b5b9c475-hhsfh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali55153b1f323", MAC:"76:0c:e0:5a:c5:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 04:50:27.574286 containerd[1644]: 2025-11-05 04:50:27.570 [INFO][4835] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" Namespace="calico-system" Pod="calico-kube-controllers-75b5b9c475-hhsfh" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75b5b9c475--hhsfh-eth0" Nov 5 04:50:27.599252 containerd[1644]: time="2025-11-05T04:50:27.598926930Z" level=info msg="connecting to shim 
d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a" address="unix:///run/containerd/s/d9e9dfdab2ae7d8ff773743d4ea4269022f562395cd35dd0dccac34bd6848226" namespace=k8s.io protocol=ttrpc version=3 Nov 5 04:50:27.627006 systemd[1]: Started cri-containerd-d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a.scope - libcontainer container d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a. Nov 5 04:50:27.640850 systemd-resolved[1311]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 04:50:27.678478 containerd[1644]: time="2025-11-05T04:50:27.678266033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75b5b9c475-hhsfh,Uid:c1cae229-f08d-4b80-be11-a077ed5ab750,Namespace:calico-system,Attempt:0,} returns sandbox id \"d20c50e7f92964aecb943f465b70aa770815f9d10b2f0c804d66e51168e4172a\"" Nov 5 04:50:27.680914 containerd[1644]: time="2025-11-05T04:50:27.680870641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 04:50:27.981084 systemd-networkd[1534]: cali14b5d5a6fcf: Gained IPv6LL Nov 5 04:50:28.050457 containerd[1644]: time="2025-11-05T04:50:28.050387756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:28.090711 containerd[1644]: time="2025-11-05T04:50:28.090635639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 04:50:28.090711 containerd[1644]: time="2025-11-05T04:50:28.090700530Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:28.090954 kubelet[2814]: E1105 04:50:28.090913 2814 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:50:28.090997 kubelet[2814]: E1105 04:50:28.090966 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:50:28.091071 kubelet[2814]: E1105 04:50:28.091046 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75b5b9c475-hhsfh_calico-system(c1cae229-f08d-4b80-be11-a077ed5ab750): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:28.091112 kubelet[2814]: E1105 04:50:28.091082 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" podUID="c1cae229-f08d-4b80-be11-a077ed5ab750" Nov 5 04:50:28.099544 kubelet[2814]: E1105 04:50:28.099512 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:28.100027 kubelet[2814]: E1105 04:50:28.099978 2814 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" podUID="c1cae229-f08d-4b80-be11-a077ed5ab750" Nov 5 04:50:29.102249 kubelet[2814]: E1105 04:50:29.102179 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" podUID="c1cae229-f08d-4b80-be11-a077ed5ab750" Nov 5 04:50:29.388919 systemd-networkd[1534]: cali55153b1f323: Gained IPv6LL Nov 5 04:50:29.702161 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:60146.service - OpenSSH per-connection server daemon (10.0.0.1:60146). Nov 5 04:50:29.756729 sshd[4922]: Accepted publickey for core from 10.0.0.1 port 60146 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:29.758823 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:29.764006 systemd-logind[1625]: New session 12 of user core. Nov 5 04:50:29.771022 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 5 04:50:29.879750 sshd[4925]: Connection closed by 10.0.0.1 port 60146 Nov 5 04:50:29.879353 sshd-session[4922]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:29.885607 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:60146.service: Deactivated successfully. Nov 5 04:50:29.888251 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 04:50:29.889240 systemd-logind[1625]: Session 12 logged out. Waiting for processes to exit. Nov 5 04:50:29.891090 systemd-logind[1625]: Removed session 12. Nov 5 04:50:34.902793 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:36748.service - OpenSSH per-connection server daemon (10.0.0.1:36748). Nov 5 04:50:34.972969 sshd[4948]: Accepted publickey for core from 10.0.0.1 port 36748 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:34.975087 sshd-session[4948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:34.980081 systemd-logind[1625]: New session 13 of user core. Nov 5 04:50:34.991869 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 04:50:35.077945 sshd[4951]: Connection closed by 10.0.0.1 port 36748 Nov 5 04:50:35.078309 sshd-session[4948]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:35.091201 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:36748.service: Deactivated successfully. Nov 5 04:50:35.093693 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 04:50:35.094803 systemd-logind[1625]: Session 13 logged out. Waiting for processes to exit. Nov 5 04:50:35.098272 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:36758.service - OpenSSH per-connection server daemon (10.0.0.1:36758). Nov 5 04:50:35.099391 systemd-logind[1625]: Removed session 13. 
Nov 5 04:50:35.157066 sshd[4965]: Accepted publickey for core from 10.0.0.1 port 36758 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:35.158604 sshd-session[4965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:35.163630 systemd-logind[1625]: New session 14 of user core. Nov 5 04:50:35.171876 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 04:50:35.293003 sshd[4968]: Connection closed by 10.0.0.1 port 36758 Nov 5 04:50:35.293594 sshd-session[4965]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:35.307419 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:36758.service: Deactivated successfully. Nov 5 04:50:35.311504 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 04:50:35.312664 systemd-logind[1625]: Session 14 logged out. Waiting for processes to exit. Nov 5 04:50:35.316572 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:36760.service - OpenSSH per-connection server daemon (10.0.0.1:36760). Nov 5 04:50:35.317401 systemd-logind[1625]: Removed session 14. Nov 5 04:50:35.367767 sshd[4979]: Accepted publickey for core from 10.0.0.1 port 36760 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:35.369135 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:35.374119 systemd-logind[1625]: New session 15 of user core. Nov 5 04:50:35.381861 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 04:50:35.469138 sshd[4982]: Connection closed by 10.0.0.1 port 36760 Nov 5 04:50:35.471476 sshd-session[4979]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:35.476969 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:36760.service: Deactivated successfully. Nov 5 04:50:35.479362 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 04:50:35.480820 systemd-logind[1625]: Session 15 logged out. Waiting for processes to exit. 
Nov 5 04:50:35.482158 systemd-logind[1625]: Removed session 15. Nov 5 04:50:35.860540 containerd[1644]: time="2025-11-05T04:50:35.860339646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 04:50:36.196880 containerd[1644]: time="2025-11-05T04:50:36.196804528Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:36.198456 containerd[1644]: time="2025-11-05T04:50:36.198349548Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 04:50:36.198456 containerd[1644]: time="2025-11-05T04:50:36.198412737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:36.198665 kubelet[2814]: E1105 04:50:36.198613 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:50:36.198665 kubelet[2814]: E1105 04:50:36.198662 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:50:36.199234 kubelet[2814]: E1105 04:50:36.198798 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-57fcc4ffd7-8h8nc_calico-system(b3d93d76-fadd-41a3-bad6-e61b29990155): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:36.199780 containerd[1644]: time="2025-11-05T04:50:36.199722374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 04:50:36.534682 containerd[1644]: time="2025-11-05T04:50:36.534547436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:36.535823 containerd[1644]: time="2025-11-05T04:50:36.535793553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 04:50:36.535912 containerd[1644]: time="2025-11-05T04:50:36.535872150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:36.536054 kubelet[2814]: E1105 04:50:36.536014 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:50:36.536113 kubelet[2814]: E1105 04:50:36.536060 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 04:50:36.536172 kubelet[2814]: E1105 04:50:36.536150 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-57fcc4ffd7-8h8nc_calico-system(b3d93d76-fadd-41a3-bad6-e61b29990155): ErrImagePull: rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:36.536220 kubelet[2814]: E1105 04:50:36.536191 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57fcc4ffd7-8h8nc" podUID="b3d93d76-fadd-41a3-bad6-e61b29990155" Nov 5 04:50:36.860225 containerd[1644]: time="2025-11-05T04:50:36.859893474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 04:50:37.164323 containerd[1644]: time="2025-11-05T04:50:37.164255768Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:37.200049 containerd[1644]: time="2025-11-05T04:50:37.199970099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 04:50:37.200240 containerd[1644]: time="2025-11-05T04:50:37.200001420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:37.200308 kubelet[2814]: E1105 04:50:37.200266 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:50:37.200715 kubelet[2814]: E1105 04:50:37.200315 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 04:50:37.200715 kubelet[2814]: E1105 04:50:37.200397 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zml8c_calico-system(f4956568-400e-4c71-8a7d-11217f3b2032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:37.201416 containerd[1644]: time="2025-11-05T04:50:37.201372420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 04:50:37.585759 containerd[1644]: time="2025-11-05T04:50:37.585591386Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:37.625122 containerd[1644]: time="2025-11-05T04:50:37.625078817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:37.625209 containerd[1644]: time="2025-11-05T04:50:37.625155886Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 04:50:37.625450 kubelet[2814]: E1105 04:50:37.625411 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:50:37.625505 kubelet[2814]: E1105 04:50:37.625462 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 04:50:37.625570 kubelet[2814]: E1105 04:50:37.625549 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zml8c_calico-system(f4956568-400e-4c71-8a7d-11217f3b2032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:37.625619 kubelet[2814]: E1105 04:50:37.625593 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:50:37.860777 containerd[1644]: time="2025-11-05T04:50:37.860363930Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:50:38.202543 containerd[1644]: time="2025-11-05T04:50:38.202474719Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:38.203848 containerd[1644]: time="2025-11-05T04:50:38.203805878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:50:38.203899 containerd[1644]: time="2025-11-05T04:50:38.203850084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:38.204202 kubelet[2814]: E1105 04:50:38.204138 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:50:38.204548 kubelet[2814]: E1105 04:50:38.204218 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:50:38.204548 kubelet[2814]: E1105 04:50:38.204350 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5c9fdcbf84-kfpz4_calico-apiserver(28221097-2724-4244-b685-bd415dc30351): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:38.204548 kubelet[2814]: E1105 04:50:38.204400 2814 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" podUID="28221097-2724-4244-b685-bd415dc30351" Nov 5 04:50:38.859808 containerd[1644]: time="2025-11-05T04:50:38.859753338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 04:50:39.202606 containerd[1644]: time="2025-11-05T04:50:39.202557049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:39.289456 containerd[1644]: time="2025-11-05T04:50:39.289399566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:39.289547 containerd[1644]: time="2025-11-05T04:50:39.289435495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 04:50:39.289771 kubelet[2814]: E1105 04:50:39.289702 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:50:39.290068 kubelet[2814]: E1105 04:50:39.289773 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:50:39.290068 
kubelet[2814]: E1105 04:50:39.289856 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-cqvt5_calico-system(42c91c94-d899-482d-a358-2c80842231e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:39.290068 kubelet[2814]: E1105 04:50:39.289890 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cqvt5" podUID="42c91c94-d899-482d-a358-2c80842231e1" Nov 5 04:50:40.483060 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:48062.service - OpenSSH per-connection server daemon (10.0.0.1:48062). Nov 5 04:50:40.538447 sshd[5003]: Accepted publickey for core from 10.0.0.1 port 48062 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:40.539960 sshd-session[5003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:40.544567 systemd-logind[1625]: New session 16 of user core. Nov 5 04:50:40.551874 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 04:50:40.626473 sshd[5006]: Connection closed by 10.0.0.1 port 48062 Nov 5 04:50:40.626829 sshd-session[5003]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:40.631964 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:48062.service: Deactivated successfully. Nov 5 04:50:40.634136 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 04:50:40.635050 systemd-logind[1625]: Session 16 logged out. Waiting for processes to exit. Nov 5 04:50:40.636708 systemd-logind[1625]: Removed session 16. 
Nov 5 04:50:41.859946 containerd[1644]: time="2025-11-05T04:50:41.859885235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 04:50:42.187646 containerd[1644]: time="2025-11-05T04:50:42.187559045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:42.189070 containerd[1644]: time="2025-11-05T04:50:42.189006009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:50:42.189138 containerd[1644]: time="2025-11-05T04:50:42.189098748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:42.189332 kubelet[2814]: E1105 04:50:42.189285 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:50:42.189812 kubelet[2814]: E1105 04:50:42.189339 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:50:42.189812 kubelet[2814]: E1105 04:50:42.189428 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5c9fdcbf84-drbnr_calico-apiserver(6f45e524-4d73-4922-a936-2a28ed045d2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:42.189812 kubelet[2814]: E1105 04:50:42.189464 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-drbnr" podUID="6f45e524-4d73-4922-a936-2a28ed045d2c" Nov 5 04:50:43.860047 containerd[1644]: time="2025-11-05T04:50:43.859985698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 04:50:44.313018 containerd[1644]: time="2025-11-05T04:50:44.312955624Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:50:44.335139 containerd[1644]: time="2025-11-05T04:50:44.335054762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 04:50:44.335215 containerd[1644]: time="2025-11-05T04:50:44.335092425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Nov 5 04:50:44.335473 kubelet[2814]: E1105 04:50:44.335417 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:50:44.335816 kubelet[2814]: E1105 04:50:44.335482 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 04:50:44.335816 kubelet[2814]: E1105 04:50:44.335581 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75b5b9c475-hhsfh_calico-system(c1cae229-f08d-4b80-be11-a077ed5ab750): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 04:50:44.335816 kubelet[2814]: E1105 04:50:44.335620 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" podUID="c1cae229-f08d-4b80-be11-a077ed5ab750" Nov 5 04:50:45.643761 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:48078.service - OpenSSH per-connection server daemon (10.0.0.1:48078). Nov 5 04:50:45.702977 sshd[5025]: Accepted publickey for core from 10.0.0.1 port 48078 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:45.704334 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:45.708654 systemd-logind[1625]: New session 17 of user core. Nov 5 04:50:45.714883 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 5 04:50:45.792847 sshd[5028]: Connection closed by 10.0.0.1 port 48078 Nov 5 04:50:45.793257 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:45.798336 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:48078.service: Deactivated successfully. Nov 5 04:50:45.800823 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 04:50:45.801754 systemd-logind[1625]: Session 17 logged out. Waiting for processes to exit. Nov 5 04:50:45.803159 systemd-logind[1625]: Removed session 17. Nov 5 04:50:47.860706 kubelet[2814]: E1105 04:50:47.860629 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57fcc4ffd7-8h8nc" podUID="b3d93d76-fadd-41a3-bad6-e61b29990155" Nov 5 04:50:49.860246 kubelet[2814]: E1105 04:50:49.860117 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cqvt5" 
podUID="42c91c94-d899-482d-a358-2c80842231e1" Nov 5 04:50:49.860246 kubelet[2814]: E1105 04:50:49.860121 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" podUID="28221097-2724-4244-b685-bd415dc30351" Nov 5 04:50:50.806781 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:45782.service - OpenSSH per-connection server daemon (10.0.0.1:45782). Nov 5 04:50:50.862936 kubelet[2814]: E1105 04:50:50.862841 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032" Nov 5 04:50:50.868989 sshd[5045]: Accepted publickey for core from 10.0.0.1 port 45782 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:50.870825 sshd-session[5045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 
5 04:50:50.875700 systemd-logind[1625]: New session 18 of user core. Nov 5 04:50:50.883974 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 04:50:50.993538 sshd[5048]: Connection closed by 10.0.0.1 port 45782 Nov 5 04:50:50.993901 sshd-session[5045]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:50.998553 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:45782.service: Deactivated successfully. Nov 5 04:50:51.000814 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 04:50:51.001591 systemd-logind[1625]: Session 18 logged out. Waiting for processes to exit. Nov 5 04:50:51.003205 systemd-logind[1625]: Removed session 18. Nov 5 04:50:51.859087 kubelet[2814]: E1105 04:50:51.859000 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:51.859288 kubelet[2814]: E1105 04:50:51.859186 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:54.148962 kubelet[2814]: E1105 04:50:54.148899 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:50:55.859948 kubelet[2814]: E1105 04:50:55.859886 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-drbnr" podUID="6f45e524-4d73-4922-a936-2a28ed045d2c" Nov 5 04:50:56.008489 systemd[1]: 
Started sshd@18-10.0.0.55:22-10.0.0.1:45792.service - OpenSSH per-connection server daemon (10.0.0.1:45792). Nov 5 04:50:56.076087 sshd[5091]: Accepted publickey for core from 10.0.0.1 port 45792 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:56.077863 sshd-session[5091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:56.085146 systemd-logind[1625]: New session 19 of user core. Nov 5 04:50:56.088903 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 04:50:56.176836 sshd[5094]: Connection closed by 10.0.0.1 port 45792 Nov 5 04:50:56.177228 sshd-session[5091]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:56.190242 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:45792.service: Deactivated successfully. Nov 5 04:50:56.192888 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 04:50:56.193878 systemd-logind[1625]: Session 19 logged out. Waiting for processes to exit. Nov 5 04:50:56.197278 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:45800.service - OpenSSH per-connection server daemon (10.0.0.1:45800). Nov 5 04:50:56.198346 systemd-logind[1625]: Removed session 19. Nov 5 04:50:56.259065 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 45800 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:56.260979 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:56.266516 systemd-logind[1625]: New session 20 of user core. Nov 5 04:50:56.274884 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 04:50:56.811202 sshd[5111]: Connection closed by 10.0.0.1 port 45800 Nov 5 04:50:56.811716 sshd-session[5108]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:56.827831 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:45800.service: Deactivated successfully. Nov 5 04:50:56.829995 systemd[1]: session-20.scope: Deactivated successfully. 
Nov 5 04:50:56.830940 systemd-logind[1625]: Session 20 logged out. Waiting for processes to exit. Nov 5 04:50:56.834199 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:45808.service - OpenSSH per-connection server daemon (10.0.0.1:45808). Nov 5 04:50:56.835016 systemd-logind[1625]: Removed session 20. Nov 5 04:50:56.886329 sshd[5123]: Accepted publickey for core from 10.0.0.1 port 45808 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:56.888273 sshd-session[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:56.893317 systemd-logind[1625]: New session 21 of user core. Nov 5 04:50:56.898885 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 04:50:57.371466 sshd[5126]: Connection closed by 10.0.0.1 port 45808 Nov 5 04:50:57.372438 sshd-session[5123]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:57.383953 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:45808.service: Deactivated successfully. Nov 5 04:50:57.386281 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 04:50:57.387145 systemd-logind[1625]: Session 21 logged out. Waiting for processes to exit. Nov 5 04:50:57.391338 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:45810.service - OpenSSH per-connection server daemon (10.0.0.1:45810). Nov 5 04:50:57.392261 systemd-logind[1625]: Removed session 21. Nov 5 04:50:57.449037 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 45810 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:57.450765 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:57.455611 systemd-logind[1625]: New session 22 of user core. Nov 5 04:50:57.463856 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 5 04:50:57.858563 sshd[5148]: Connection closed by 10.0.0.1 port 45810 Nov 5 04:50:57.860984 sshd-session[5145]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:57.863667 kubelet[2814]: E1105 04:50:57.863599 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" podUID="c1cae229-f08d-4b80-be11-a077ed5ab750" Nov 5 04:50:57.874921 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:45810.service: Deactivated successfully. Nov 5 04:50:57.877911 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 04:50:57.879492 systemd-logind[1625]: Session 22 logged out. Waiting for processes to exit. Nov 5 04:50:57.883789 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:45826.service - OpenSSH per-connection server daemon (10.0.0.1:45826). Nov 5 04:50:57.885308 systemd-logind[1625]: Removed session 22. Nov 5 04:50:57.945871 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 45826 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:50:57.947839 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:50:57.952760 systemd-logind[1625]: New session 23 of user core. Nov 5 04:50:57.959873 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 04:50:58.034335 sshd[5163]: Connection closed by 10.0.0.1 port 45826 Nov 5 04:50:58.034700 sshd-session[5160]: pam_unix(sshd:session): session closed for user core Nov 5 04:50:58.040009 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:45826.service: Deactivated successfully. 
Nov 5 04:50:58.042129 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 04:50:58.043097 systemd-logind[1625]: Session 23 logged out. Waiting for processes to exit. Nov 5 04:50:58.045050 systemd-logind[1625]: Removed session 23. Nov 5 04:50:59.858876 kubelet[2814]: E1105 04:50:59.858819 2814 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 04:51:00.860107 containerd[1644]: time="2025-11-05T04:51:00.860039240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 04:51:01.195585 containerd[1644]: time="2025-11-05T04:51:01.195499775Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:51:01.196721 containerd[1644]: time="2025-11-05T04:51:01.196655321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 04:51:01.196798 containerd[1644]: time="2025-11-05T04:51:01.196702902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Nov 5 04:51:01.196969 kubelet[2814]: E1105 04:51:01.196922 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:51:01.197334 kubelet[2814]: E1105 04:51:01.196971 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 04:51:01.197334 kubelet[2814]: E1105 04:51:01.197066 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-cqvt5_calico-system(42c91c94-d899-482d-a358-2c80842231e1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 04:51:01.197334 kubelet[2814]: E1105 04:51:01.197099 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-cqvt5" podUID="42c91c94-d899-482d-a358-2c80842231e1" Nov 5 04:51:02.860419 containerd[1644]: time="2025-11-05T04:51:02.860356375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 04:51:03.046712 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:37468.service - OpenSSH per-connection server daemon (10.0.0.1:37468). Nov 5 04:51:03.117223 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 37468 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk Nov 5 04:51:03.118893 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 04:51:03.123583 systemd-logind[1625]: New session 24 of user core. Nov 5 04:51:03.135900 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 5 04:51:03.208769 containerd[1644]: time="2025-11-05T04:51:03.208694174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:51:03.210108 containerd[1644]: time="2025-11-05T04:51:03.210073325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 04:51:03.210187 containerd[1644]: time="2025-11-05T04:51:03.210161373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Nov 5 04:51:03.210384 kubelet[2814]: E1105 04:51:03.210343 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:51:03.210779 kubelet[2814]: E1105 04:51:03.210396 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 04:51:03.210779 kubelet[2814]: E1105 04:51:03.210637 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-57fcc4ffd7-8h8nc_calico-system(b3d93d76-fadd-41a3-bad6-e61b29990155): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 04:51:03.211088 containerd[1644]: time="2025-11-05T04:51:03.211062653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 
04:51:03.232930 sshd[5181]: Connection closed by 10.0.0.1 port 37468 Nov 5 04:51:03.233292 sshd-session[5178]: pam_unix(sshd:session): session closed for user core Nov 5 04:51:03.239789 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:37468.service: Deactivated successfully. Nov 5 04:51:03.241899 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 04:51:03.242769 systemd-logind[1625]: Session 24 logged out. Waiting for processes to exit. Nov 5 04:51:03.244147 systemd-logind[1625]: Removed session 24. Nov 5 04:51:03.604032 containerd[1644]: time="2025-11-05T04:51:03.603959662Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 04:51:03.649453 containerd[1644]: time="2025-11-05T04:51:03.649366692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 04:51:03.649453 containerd[1644]: time="2025-11-05T04:51:03.649432007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Nov 5 04:51:03.649751 kubelet[2814]: E1105 04:51:03.649681 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:51:03.649751 kubelet[2814]: E1105 04:51:03.649750 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 04:51:03.650803 kubelet[2814]: E1105 04:51:03.650199 2814 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5c9fdcbf84-kfpz4_calico-apiserver(28221097-2724-4244-b685-bd415dc30351): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 04:51:03.650803 kubelet[2814]: E1105 04:51:03.650247 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-kfpz4" podUID="28221097-2724-4244-b685-bd415dc30351"
Nov 5 04:51:03.651468 containerd[1644]: time="2025-11-05T04:51:03.651411323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 5 04:51:03.946548 containerd[1644]: time="2025-11-05T04:51:03.946493164Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 04:51:03.947639 containerd[1644]: time="2025-11-05T04:51:03.947581591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 5 04:51:03.947760 containerd[1644]: time="2025-11-05T04:51:03.947672645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Nov 5 04:51:03.947916 kubelet[2814]: E1105 04:51:03.947766 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 5 04:51:03.947916 kubelet[2814]: E1105 04:51:03.947804 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 5 04:51:03.948057 containerd[1644]: time="2025-11-05T04:51:03.948028914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 5 04:51:03.948119 kubelet[2814]: E1105 04:51:03.948079 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-57fcc4ffd7-8h8nc_calico-system(b3d93d76-fadd-41a3-bad6-e61b29990155): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 5 04:51:03.948183 kubelet[2814]: E1105 04:51:03.948154 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-57fcc4ffd7-8h8nc" podUID="b3d93d76-fadd-41a3-bad6-e61b29990155"
Nov 5 04:51:04.267523 containerd[1644]: time="2025-11-05T04:51:04.267349224Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 04:51:04.286682 containerd[1644]: time="2025-11-05T04:51:04.286607598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Nov 5 04:51:04.286889 containerd[1644]: time="2025-11-05T04:51:04.286682600Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 5 04:51:04.287126 kubelet[2814]: E1105 04:51:04.287052 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 04:51:04.287530 kubelet[2814]: E1105 04:51:04.287123 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 04:51:04.287530 kubelet[2814]: E1105 04:51:04.287239 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-zml8c_calico-system(f4956568-400e-4c71-8a7d-11217f3b2032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 5 04:51:04.288971 containerd[1644]: time="2025-11-05T04:51:04.288943031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 5 04:51:04.608914 containerd[1644]: time="2025-11-05T04:51:04.608756511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 04:51:04.629323 containerd[1644]: time="2025-11-05T04:51:04.629259657Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 5 04:51:04.629391 containerd[1644]: time="2025-11-05T04:51:04.629311757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Nov 5 04:51:04.629574 kubelet[2814]: E1105 04:51:04.629516 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 04:51:04.629647 kubelet[2814]: E1105 04:51:04.629578 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 04:51:04.629728 kubelet[2814]: E1105 04:51:04.629679 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-zml8c_calico-system(f4956568-400e-4c71-8a7d-11217f3b2032): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 5 04:51:04.629728 kubelet[2814]: E1105 04:51:04.629727 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-zml8c" podUID="f4956568-400e-4c71-8a7d-11217f3b2032"
Nov 5 04:51:08.247024 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:37484.service - OpenSSH per-connection server daemon (10.0.0.1:37484).
Nov 5 04:51:08.302722 sshd[5204]: Accepted publickey for core from 10.0.0.1 port 37484 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk
Nov 5 04:51:08.304094 sshd-session[5204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 04:51:08.308299 systemd-logind[1625]: New session 25 of user core.
Nov 5 04:51:08.321870 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 5 04:51:08.395697 sshd[5207]: Connection closed by 10.0.0.1 port 37484
Nov 5 04:51:08.396038 sshd-session[5204]: pam_unix(sshd:session): session closed for user core
Nov 5 04:51:08.400257 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:37484.service: Deactivated successfully.
Nov 5 04:51:08.402512 systemd[1]: session-25.scope: Deactivated successfully.
Nov 5 04:51:08.403962 systemd-logind[1625]: Session 25 logged out. Waiting for processes to exit.
Nov 5 04:51:08.405466 systemd-logind[1625]: Removed session 25.
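Every pull failure in the entries above is the same `NotFound` against a `ghcr.io/flatcar/calico/*:v3.30.4` reference. When triaging a saved journal like this one, the distinct failing references can be extracted with a few lines; this is a sketch, and the regex assumes containerd's `PullImage \"...\" failed` error lines exactly as they appear above, with the backslash-escaped quotes intact in the raw text:

```python
import re

# containerd logs the failure as: msg="PullImage \"<image-ref>\" failed",
# so the raw journal text contains a literal backslash before each inner quote.
PULL_FAILED = re.compile(r'PullImage \\"([^"\\]+)\\" failed')

def failed_images(journal_text: str) -> list[str]:
    """Return the distinct image references that failed to pull, in first-seen order."""
    seen: dict[str, None] = {}
    for m in PULL_FAILED.finditer(journal_text):
        seen.setdefault(m.group(1))
    return list(seen)
```

Running it over this journal section would yield the five Calico images (apiserver, whisker-backend, csi, node-driver-registrar, kube-controllers), each failing for the same reason.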
Nov 5 04:51:09.860587 containerd[1644]: time="2025-11-05T04:51:09.860490857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 04:51:10.198766 containerd[1644]: time="2025-11-05T04:51:10.198663289Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 04:51:10.200198 containerd[1644]: time="2025-11-05T04:51:10.200122736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 04:51:10.200198 containerd[1644]: time="2025-11-05T04:51:10.200198310Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Nov 5 04:51:10.200479 kubelet[2814]: E1105 04:51:10.200308 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 04:51:10.200479 kubelet[2814]: E1105 04:51:10.200356 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 04:51:10.200479 kubelet[2814]: E1105 04:51:10.200463 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-5c9fdcbf84-drbnr_calico-apiserver(6f45e524-4d73-4922-a936-2a28ed045d2c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 04:51:10.200993 kubelet[2814]: E1105 04:51:10.200506 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5c9fdcbf84-drbnr" podUID="6f45e524-4d73-4922-a936-2a28ed045d2c"
Nov 5 04:51:12.860311 containerd[1644]: time="2025-11-05T04:51:12.860129951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 5 04:51:13.226185 containerd[1644]: time="2025-11-05T04:51:13.226135192Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 04:51:13.339341 containerd[1644]: time="2025-11-05T04:51:13.339243289Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 5 04:51:13.339511 containerd[1644]: time="2025-11-05T04:51:13.339261213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Nov 5 04:51:13.339611 kubelet[2814]: E1105 04:51:13.339546 2814 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 5 04:51:13.340027 kubelet[2814]: E1105 04:51:13.339615 2814 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 5 04:51:13.340027 kubelet[2814]: E1105 04:51:13.339705 2814 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75b5b9c475-hhsfh_calico-system(c1cae229-f08d-4b80-be11-a077ed5ab750): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 5 04:51:13.340027 kubelet[2814]: E1105 04:51:13.339763 2814 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75b5b9c475-hhsfh" podUID="c1cae229-f08d-4b80-be11-a077ed5ab750"
Nov 5 04:51:13.412872 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:37452.service - OpenSSH per-connection server daemon (10.0.0.1:37452).
Nov 5 04:51:13.473218 sshd[5220]: Accepted publickey for core from 10.0.0.1 port 37452 ssh2: RSA SHA256:ZatT2KYk/W+SgsNd1KX2cLhj/vCBqJIAEu8qJTa1ixk
Nov 5 04:51:13.475289 sshd-session[5220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 04:51:13.480778 systemd-logind[1625]: New session 26 of user core.
Nov 5 04:51:13.496068 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 5 04:51:13.572267 sshd[5223]: Connection closed by 10.0.0.1 port 37452
Nov 5 04:51:13.572611 sshd-session[5220]: pam_unix(sshd:session): session closed for user core
Nov 5 04:51:13.577615 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:37452.service: Deactivated successfully.
Nov 5 04:51:13.579935 systemd[1]: session-26.scope: Deactivated successfully.
Nov 5 04:51:13.580789 systemd-logind[1625]: Session 26 logged out. Waiting for processes to exit.
Nov 5 04:51:13.582283 systemd-logind[1625]: Removed session 26.
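The kubelet's `pod_workers.go:1324` entries above carry the affected pod and its UID as structured key=value pairs, which makes it straightforward to list which pods are stuck in `ErrImagePull`. A minimal sketch, assuming the `pod="namespace/name" podUID="uid"` suffix format shown in the log above:

```python
import re

# kubelet's "Error syncing pod, skipping" entries end with structured keys:
#   pod="<namespace>/<name>" podUID="<uid>"
SYNC_ERR = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)" podUID="([^"]+)"')

def stuck_pods(journal_text: str) -> dict[str, str]:
    """Map each pod (namespace/name) with a sync failure to its UID."""
    return {pod: uid for pod, uid in SYNC_ERR.findall(journal_text)}
```

Over this section it would report the five affected pods: two calico-apiserver replicas, the whisker pod, the csi-node-driver pod, and the calico-kube-controllers pod.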