Nov 24 06:45:54.860225 kernel: Linux version 6.12.58-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Nov 23 20:49:05 -00 2025
Nov 24 06:45:54.860255 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 06:45:54.860269 kernel: BIOS-provided physical RAM map:
Nov 24 06:45:54.860277 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 24 06:45:54.860283 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 24 06:45:54.860294 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 24 06:45:54.860302 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Nov 24 06:45:54.860309 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Nov 24 06:45:54.860315 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 24 06:45:54.860322 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 24 06:45:54.860348 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 24 06:45:54.860358 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 24 06:45:54.860364 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 24 06:45:54.860371 kernel: NX (Execute Disable) protection: active
Nov 24 06:45:54.860379 kernel: APIC: Static calls initialized
Nov 24 06:45:54.860386 kernel: SMBIOS 2.8 present.
Nov 24 06:45:54.860395 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Nov 24 06:45:54.860402 kernel: DMI: Memory slots populated: 1/1
Nov 24 06:45:54.860409 kernel: Hypervisor detected: KVM
Nov 24 06:45:54.860416 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 24 06:45:54.860423 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 24 06:45:54.860430 kernel: kvm-clock: using sched offset of 3744292277 cycles
Nov 24 06:45:54.860438 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 24 06:45:54.860445 kernel: tsc: Detected 2794.734 MHz processor
Nov 24 06:45:54.860452 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 24 06:45:54.860462 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 24 06:45:54.860472 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Nov 24 06:45:54.860481 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 24 06:45:54.860489 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 24 06:45:54.860496 kernel: Using GB pages for direct mapping
Nov 24 06:45:54.860503 kernel: ACPI: Early table checksum verification disabled
Nov 24 06:45:54.860510 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Nov 24 06:45:54.860518 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:45:54.860525 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:45:54.860532 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:45:54.860542 kernel: ACPI: FACS 0x000000009CFE0000 000040
Nov 24 06:45:54.860549 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:45:54.860556 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:45:54.860563 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:45:54.860571 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 24 06:45:54.860581 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Nov 24 06:45:54.860588 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Nov 24 06:45:54.860598 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Nov 24 06:45:54.860605 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Nov 24 06:45:54.860613 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Nov 24 06:45:54.860620 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Nov 24 06:45:54.860628 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Nov 24 06:45:54.860635 kernel: No NUMA configuration found
Nov 24 06:45:54.860642 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Nov 24 06:45:54.860652 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Nov 24 06:45:54.860660 kernel: Zone ranges:
Nov 24 06:45:54.860667 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 24 06:45:54.860674 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Nov 24 06:45:54.860682 kernel: Normal empty
Nov 24 06:45:54.860689 kernel: Device empty
Nov 24 06:45:54.860696 kernel: Movable zone start for each node
Nov 24 06:45:54.860704 kernel: Early memory node ranges
Nov 24 06:45:54.860711 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 24 06:45:54.860718 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Nov 24 06:45:54.860728 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Nov 24 06:45:54.860735 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 24 06:45:54.860743 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 24 06:45:54.860750 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Nov 24 06:45:54.860758 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 24 06:45:54.860765 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 24 06:45:54.860772 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 24 06:45:54.860780 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 24 06:45:54.860787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 24 06:45:54.860797 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 24 06:45:54.860804 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 24 06:45:54.860811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 24 06:45:54.860819 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 24 06:45:54.860826 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 24 06:45:54.860834 kernel: TSC deadline timer available
Nov 24 06:45:54.860841 kernel: CPU topo: Max. logical packages: 1
Nov 24 06:45:54.860848 kernel: CPU topo: Max. logical dies: 1
Nov 24 06:45:54.860856 kernel: CPU topo: Max. dies per package: 1
Nov 24 06:45:54.860865 kernel: CPU topo: Max. threads per core: 1
Nov 24 06:45:54.860872 kernel: CPU topo: Num. cores per package: 4
Nov 24 06:45:54.860879 kernel: CPU topo: Num. threads per package: 4
Nov 24 06:45:54.860887 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 24 06:45:54.860894 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 24 06:45:54.860901 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 24 06:45:54.860909 kernel: kvm-guest: setup PV sched yield
Nov 24 06:45:54.860916 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 24 06:45:54.860923 kernel: Booting paravirtualized kernel on KVM
Nov 24 06:45:54.860931 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 24 06:45:54.860941 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 24 06:45:54.860948 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 24 06:45:54.860956 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 24 06:45:54.860963 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 24 06:45:54.860970 kernel: kvm-guest: PV spinlocks enabled
Nov 24 06:45:54.860978 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 24 06:45:54.860987 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14
Nov 24 06:45:54.860995 kernel: random: crng init done
Nov 24 06:45:54.861004 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 24 06:45:54.861012 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 24 06:45:54.861019 kernel: Fallback order for Node 0: 0
Nov 24 06:45:54.861027 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Nov 24 06:45:54.861034 kernel: Policy zone: DMA32
Nov 24 06:45:54.861041 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 24 06:45:54.861049 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 24 06:45:54.861056 kernel: ftrace: allocating 40103 entries in 157 pages
Nov 24 06:45:54.861064 kernel: ftrace: allocated 157 pages with 5 groups
Nov 24 06:45:54.861073 kernel: Dynamic Preempt: voluntary
Nov 24 06:45:54.861080 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 24 06:45:54.861089 kernel: rcu: RCU event tracing is enabled.
Nov 24 06:45:54.861096 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 24 06:45:54.861104 kernel: Trampoline variant of Tasks RCU enabled.
Nov 24 06:45:54.861120 kernel: Rude variant of Tasks RCU enabled.
Nov 24 06:45:54.861128 kernel: Tracing variant of Tasks RCU enabled.
Nov 24 06:45:54.861136 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 24 06:45:54.861143 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 24 06:45:54.861151 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 24 06:45:54.861161 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 24 06:45:54.861168 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 24 06:45:54.861176 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 24 06:45:54.861184 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 24 06:45:54.861198 kernel: Console: colour VGA+ 80x25
Nov 24 06:45:54.861216 kernel: printk: legacy console [ttyS0] enabled
Nov 24 06:45:54.861224 kernel: ACPI: Core revision 20240827
Nov 24 06:45:54.861232 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 24 06:45:54.861240 kernel: APIC: Switch to symmetric I/O mode setup
Nov 24 06:45:54.861250 kernel: x2apic enabled
Nov 24 06:45:54.861260 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 24 06:45:54.861274 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 24 06:45:54.861284 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 24 06:45:54.861295 kernel: kvm-guest: setup PV IPIs
Nov 24 06:45:54.861306 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 24 06:45:54.861315 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848d2577f5, max_idle_ns: 440795325508 ns
Nov 24 06:45:54.861325 kernel: Calibrating delay loop (skipped) preset value.. 5589.46 BogoMIPS (lpj=2794734)
Nov 24 06:45:54.861347 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 24 06:45:54.861355 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 24 06:45:54.861363 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 24 06:45:54.861371 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 24 06:45:54.861379 kernel: Spectre V2 : Mitigation: Retpolines
Nov 24 06:45:54.861387 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 24 06:45:54.861395 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 24 06:45:54.861402 kernel: active return thunk: retbleed_return_thunk
Nov 24 06:45:54.861413 kernel: RETBleed: Mitigation: untrained return thunk
Nov 24 06:45:54.861421 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 24 06:45:54.861429 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 24 06:45:54.861437 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 24 06:45:54.861446 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 24 06:45:54.861454 kernel: active return thunk: srso_return_thunk
Nov 24 06:45:54.861462 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 24 06:45:54.861469 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 24 06:45:54.861480 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 24 06:45:54.861487 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 24 06:45:54.861495 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 24 06:45:54.861503 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 24 06:45:54.861511 kernel: Freeing SMP alternatives memory: 32K
Nov 24 06:45:54.861519 kernel: pid_max: default: 32768 minimum: 301
Nov 24 06:45:54.861526 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 24 06:45:54.861534 kernel: landlock: Up and running.
Nov 24 06:45:54.861542 kernel: SELinux: Initializing.
Nov 24 06:45:54.861552 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 24 06:45:54.861559 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 24 06:45:54.861568 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 24 06:45:54.861575 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 24 06:45:54.861583 kernel: ... version: 0
Nov 24 06:45:54.861591 kernel: ... bit width: 48
Nov 24 06:45:54.861598 kernel: ... generic registers: 6
Nov 24 06:45:54.861606 kernel: ... value mask: 0000ffffffffffff
Nov 24 06:45:54.861614 kernel: ... max period: 00007fffffffffff
Nov 24 06:45:54.861624 kernel: ... fixed-purpose events: 0
Nov 24 06:45:54.861631 kernel: ... event mask: 000000000000003f
Nov 24 06:45:54.861639 kernel: signal: max sigframe size: 1776
Nov 24 06:45:54.861647 kernel: rcu: Hierarchical SRCU implementation.
Nov 24 06:45:54.861655 kernel: rcu: Max phase no-delay instances is 400.
Nov 24 06:45:54.861663 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 24 06:45:54.861671 kernel: smp: Bringing up secondary CPUs ...
Nov 24 06:45:54.861678 kernel: smpboot: x86: Booting SMP configuration:
Nov 24 06:45:54.861686 kernel: .... node #0, CPUs: #1 #2 #3
Nov 24 06:45:54.861697 kernel: smp: Brought up 1 node, 4 CPUs
Nov 24 06:45:54.861708 kernel: smpboot: Total of 4 processors activated (22357.87 BogoMIPS)
Nov 24 06:45:54.861719 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved)
Nov 24 06:45:54.861729 kernel: devtmpfs: initialized
Nov 24 06:45:54.861739 kernel: x86/mm: Memory block size: 128MB
Nov 24 06:45:54.861750 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 24 06:45:54.861761 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 24 06:45:54.861771 kernel: pinctrl core: initialized pinctrl subsystem
Nov 24 06:45:54.861780 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 24 06:45:54.861791 kernel: audit: initializing netlink subsys (disabled)
Nov 24 06:45:54.861799 kernel: audit: type=2000 audit(1763966752.690:1): state=initialized audit_enabled=0 res=1
Nov 24 06:45:54.861807 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 24 06:45:54.861814 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 24 06:45:54.861822 kernel: cpuidle: using governor menu
Nov 24 06:45:54.861830 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 24 06:45:54.861838 kernel: dca service started, version 1.12.1
Nov 24 06:45:54.861846 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Nov 24 06:45:54.861855 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 24 06:45:54.861870 kernel: PCI: Using configuration type 1 for base access
Nov 24 06:45:54.861879 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 24 06:45:54.861887 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 24 06:45:54.861895 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 24 06:45:54.861903 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 24 06:45:54.861911 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 24 06:45:54.861918 kernel: ACPI: Added _OSI(Module Device)
Nov 24 06:45:54.861926 kernel: ACPI: Added _OSI(Processor Device)
Nov 24 06:45:54.861934 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 24 06:45:54.861944 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 24 06:45:54.861952 kernel: ACPI: Interpreter enabled
Nov 24 06:45:54.861959 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 24 06:45:54.861967 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 24 06:45:54.861975 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 24 06:45:54.861983 kernel: PCI: Using E820 reservations for host bridge windows
Nov 24 06:45:54.861991 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 24 06:45:54.861998 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 24 06:45:54.862194 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 24 06:45:54.862363 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 24 06:45:54.862493 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 24 06:45:54.862514 kernel: PCI host bridge to bus 0000:00
Nov 24 06:45:54.862692 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 24 06:45:54.862802 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 24 06:45:54.862908 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 24 06:45:54.863024 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Nov 24 06:45:54.863129 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 24 06:45:54.863245 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Nov 24 06:45:54.863378 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 24 06:45:54.863513 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 24 06:45:54.863639 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 24 06:45:54.863755 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Nov 24 06:45:54.863876 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Nov 24 06:45:54.863990 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Nov 24 06:45:54.864103 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 24 06:45:54.864249 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 24 06:45:54.864394 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Nov 24 06:45:54.864512 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Nov 24 06:45:54.864628 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Nov 24 06:45:54.864757 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 24 06:45:54.864874 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Nov 24 06:45:54.864990 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Nov 24 06:45:54.865105 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Nov 24 06:45:54.865245 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 24 06:45:54.865381 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Nov 24 06:45:54.865508 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Nov 24 06:45:54.865686 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Nov 24 06:45:54.865817 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Nov 24 06:45:54.865958 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 24 06:45:54.866088 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 24 06:45:54.866236 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 24 06:45:54.866382 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Nov 24 06:45:54.866505 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Nov 24 06:45:54.866629 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 24 06:45:54.867473 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Nov 24 06:45:54.867489 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 24 06:45:54.867498 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 24 06:45:54.867506 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 24 06:45:54.867514 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 24 06:45:54.867523 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 24 06:45:54.867534 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 24 06:45:54.867542 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 24 06:45:54.867550 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 24 06:45:54.867558 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 24 06:45:54.867566 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 24 06:45:54.867574 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 24 06:45:54.867582 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 24 06:45:54.867590 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 24 06:45:54.867598 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 24 06:45:54.867608 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 24 06:45:54.867616 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 24 06:45:54.867624 kernel: iommu: Default domain type: Translated
Nov 24 06:45:54.867632 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 24 06:45:54.867640 kernel: PCI: Using ACPI for IRQ routing
Nov 24 06:45:54.867648 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 24 06:45:54.867657 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 24 06:45:54.867665 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Nov 24 06:45:54.867788 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 24 06:45:54.867908 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 24 06:45:54.868022 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 24 06:45:54.868032 kernel: vgaarb: loaded
Nov 24 06:45:54.868040 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 24 06:45:54.868049 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 24 06:45:54.868057 kernel: clocksource: Switched to clocksource kvm-clock
Nov 24 06:45:54.868065 kernel: VFS: Disk quotas dquot_6.6.0
Nov 24 06:45:54.868073 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 24 06:45:54.868085 kernel: pnp: PnP ACPI init
Nov 24 06:45:54.868221 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 24 06:45:54.868233 kernel: pnp: PnP ACPI: found 6 devices
Nov 24 06:45:54.868241 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 24 06:45:54.868250 kernel: NET: Registered PF_INET protocol family
Nov 24 06:45:54.868258 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 24 06:45:54.868266 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 24 06:45:54.868275 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 24 06:45:54.868286 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 24 06:45:54.868294 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 24 06:45:54.868302 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 24 06:45:54.868310 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 24 06:45:54.868318 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 24 06:45:54.868326 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 24 06:45:54.868348 kernel: NET: Registered PF_XDP protocol family
Nov 24 06:45:54.868462 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 24 06:45:54.868570 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 24 06:45:54.868681 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 24 06:45:54.868787 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Nov 24 06:45:54.868896 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 24 06:45:54.869003 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Nov 24 06:45:54.869015 kernel: PCI: CLS 0 bytes, default 64
Nov 24 06:45:54.869024 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848d2577f5, max_idle_ns: 440795325508 ns
Nov 24 06:45:54.869032 kernel: Initialise system trusted keyrings
Nov 24 06:45:54.869040 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 24 06:45:54.869051 kernel: Key type asymmetric registered
Nov 24 06:45:54.869059 kernel: Asymmetric key parser 'x509' registered
Nov 24 06:45:54.869067 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 24 06:45:54.869076 kernel: io scheduler mq-deadline registered
Nov 24 06:45:54.869084 kernel: io scheduler kyber registered
Nov 24 06:45:54.869092 kernel: io scheduler bfq registered
Nov 24 06:45:54.869100 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 24 06:45:54.869109 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 24 06:45:54.869117 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 24 06:45:54.869128 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 24 06:45:54.869136 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 24 06:45:54.869144 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 24 06:45:54.869154 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 24 06:45:54.869164 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 24 06:45:54.869174 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 24 06:45:54.869184 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 24 06:45:54.869376 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 24 06:45:54.869504 kernel: rtc_cmos 00:04: registered as rtc0
Nov 24 06:45:54.869617 kernel: rtc_cmos 00:04: setting system clock to 2025-11-24T06:45:54 UTC (1763966754)
Nov 24 06:45:54.869726 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Nov 24 06:45:54.869736 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 24 06:45:54.869744 kernel: NET: Registered PF_INET6 protocol family
Nov 24 06:45:54.869752 kernel: Segment Routing with IPv6
Nov 24 06:45:54.869760 kernel: In-situ OAM (IOAM) with IPv6
Nov 24 06:45:54.869768 kernel: NET: Registered PF_PACKET protocol family
Nov 24 06:45:54.869776 kernel: Key type dns_resolver registered
Nov 24 06:45:54.869787 kernel: IPI shorthand broadcast: enabled
Nov 24 06:45:54.869795 kernel: sched_clock: Marking stable (2793004412, 197458735)->(3036042684, -45579537)
Nov 24 06:45:54.869803 kernel: registered taskstats version 1
Nov 24 06:45:54.869811 kernel: Loading compiled-in X.509 certificates
Nov 24 06:45:54.869820 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.58-flatcar: 960cbe7f2b1ea74b5c881d6d42eea4d1ac19a607'
Nov 24 06:45:54.869827 kernel: Demotion targets for Node 0: null
Nov 24 06:45:54.869835 kernel: Key type .fscrypt registered
Nov 24 06:45:54.869843 kernel: Key type fscrypt-provisioning registered
Nov 24 06:45:54.869851 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 24 06:45:54.869861 kernel: ima: Allocated hash algorithm: sha1
Nov 24 06:45:54.869869 kernel: ima: No architecture policies found
Nov 24 06:45:54.869877 kernel: clk: Disabling unused clocks
Nov 24 06:45:54.869885 kernel: Warning: unable to open an initial console.
Nov 24 06:45:54.869894 kernel: Freeing unused kernel image (initmem) memory: 46200K
Nov 24 06:45:54.869902 kernel: Write protecting the kernel read-only data: 40960k
Nov 24 06:45:54.869910 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 24 06:45:54.869918 kernel: Run /init as init process
Nov 24 06:45:54.869926 kernel: with arguments:
Nov 24 06:45:54.869937 kernel: /init
Nov 24 06:45:54.869945 kernel: with environment:
Nov 24 06:45:54.869953 kernel: HOME=/
Nov 24 06:45:54.869961 kernel: TERM=linux
Nov 24 06:45:54.869970 systemd[1]: Successfully made /usr/ read-only.
Nov 24 06:45:54.869983 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 06:45:54.870006 systemd[1]: Detected virtualization kvm.
Nov 24 06:45:54.870015 systemd[1]: Detected architecture x86-64.
Nov 24 06:45:54.870023 systemd[1]: Running in initrd.
Nov 24 06:45:54.870032 systemd[1]: No hostname configured, using default hostname.
Nov 24 06:45:54.870041 systemd[1]: Hostname set to .
Nov 24 06:45:54.870049 systemd[1]: Initializing machine ID from VM UUID.
Nov 24 06:45:54.870058 systemd[1]: Queued start job for default target initrd.target.
Nov 24 06:45:54.870067 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 06:45:54.870078 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 06:45:54.870088 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 24 06:45:54.870096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 24 06:45:54.870105 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 24 06:45:54.870115 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 24 06:45:54.870125 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 24 06:45:54.870134 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 24 06:45:54.870144 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 06:45:54.870153 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 24 06:45:54.870162 systemd[1]: Reached target paths.target - Path Units.
Nov 24 06:45:54.870170 systemd[1]: Reached target slices.target - Slice Units.
Nov 24 06:45:54.870179 systemd[1]: Reached target swap.target - Swaps.
Nov 24 06:45:54.870187 systemd[1]: Reached target timers.target - Timer Units.
Nov 24 06:45:54.870196 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 24 06:45:54.870214 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 06:45:54.870224 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 24 06:45:54.870233 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 24 06:45:54.870242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 24 06:45:54.870250 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 24 06:45:54.870260 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 24 06:45:54.870269 systemd[1]: Reached target sockets.target - Socket Units. Nov 24 06:45:54.870278 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 24 06:45:54.870288 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 24 06:45:54.870297 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 24 06:45:54.870306 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 24 06:45:54.870315 systemd[1]: Starting systemd-fsck-usr.service... Nov 24 06:45:54.870325 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 24 06:45:54.870357 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 24 06:45:54.870365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:45:54.870377 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 24 06:45:54.870386 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 24 06:45:54.870395 systemd[1]: Finished systemd-fsck-usr.service. Nov 24 06:45:54.870432 systemd-journald[201]: Collecting audit messages is disabled. Nov 24 06:45:54.870457 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 24 06:45:54.870466 systemd-journald[201]: Journal started Nov 24 06:45:54.870493 systemd-journald[201]: Runtime Journal (/run/log/journal/0010f5b85c604812976691566ab1369c) is 6M, max 48.3M, 42.2M free. 
Nov 24 06:45:54.859066 systemd-modules-load[203]: Inserted module 'overlay' Nov 24 06:45:54.939935 systemd[1]: Started systemd-journald.service - Journal Service. Nov 24 06:45:54.939963 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 24 06:45:54.939976 kernel: Bridge firewalling registered Nov 24 06:45:54.888486 systemd-modules-load[203]: Inserted module 'br_netfilter' Nov 24 06:45:54.939811 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 24 06:45:54.940323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:45:54.945407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 24 06:45:54.947745 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 24 06:45:54.954496 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 24 06:45:54.955965 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 24 06:45:54.969098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 24 06:45:54.979800 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 24 06:45:54.981309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 24 06:45:54.985737 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 24 06:45:54.987658 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 24 06:45:54.993926 systemd-tmpfiles[221]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 24 06:45:55.004464 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 24 06:45:55.008441 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 24 06:45:55.022661 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a5a093dfb613b73c778207057706f88d5254927e05ae90617f314b938bd34a14 Nov 24 06:45:55.046050 systemd-resolved[243]: Positive Trust Anchors: Nov 24 06:45:55.046063 systemd-resolved[243]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 24 06:45:55.046091 systemd-resolved[243]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 24 06:45:55.048461 systemd-resolved[243]: Defaulting to hostname 'linux'. Nov 24 06:45:55.049529 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 24 06:45:55.050731 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 24 06:45:55.138362 kernel: SCSI subsystem initialized Nov 24 06:45:55.147369 kernel: Loading iSCSI transport class v2.0-870. 
Nov 24 06:45:55.158363 kernel: iscsi: registered transport (tcp) Nov 24 06:45:55.179373 kernel: iscsi: registered transport (qla4xxx) Nov 24 06:45:55.179400 kernel: QLogic iSCSI HBA Driver Nov 24 06:45:55.200507 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 24 06:45:55.224989 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 24 06:45:55.230664 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 24 06:45:55.286220 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 24 06:45:55.289374 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 24 06:45:55.349368 kernel: raid6: avx2x4 gen() 28916 MB/s Nov 24 06:45:55.366357 kernel: raid6: avx2x2 gen() 30672 MB/s Nov 24 06:45:55.384128 kernel: raid6: avx2x1 gen() 25554 MB/s Nov 24 06:45:55.384150 kernel: raid6: using algorithm avx2x2 gen() 30672 MB/s Nov 24 06:45:55.402130 kernel: raid6: .... xor() 19692 MB/s, rmw enabled Nov 24 06:45:55.402157 kernel: raid6: using avx2x2 recovery algorithm Nov 24 06:45:55.422365 kernel: xor: automatically using best checksumming function avx Nov 24 06:45:55.584387 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 24 06:45:55.593444 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 24 06:45:55.596940 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 24 06:45:55.630820 systemd-udevd[452]: Using default interface naming scheme 'v255'. Nov 24 06:45:55.637942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 24 06:45:55.639762 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 24 06:45:55.666405 dracut-pre-trigger[454]: rd.md=0: removing MD RAID activation Nov 24 06:45:55.697615 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 24 06:45:55.701433 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 24 06:45:55.788785 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 24 06:45:55.792434 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 24 06:45:55.823357 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 24 06:45:55.827564 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Nov 24 06:45:55.830896 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 24 06:45:55.830918 kernel: cryptd: max_cpu_qlen set to 1000 Nov 24 06:45:55.830929 kernel: GPT:9289727 != 19775487 Nov 24 06:45:55.830938 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 24 06:45:55.830948 kernel: GPT:9289727 != 19775487 Nov 24 06:45:55.830969 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 24 06:45:55.830980 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:45:55.848356 kernel: AES CTR mode by8 optimization enabled Nov 24 06:45:55.863919 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 24 06:45:55.864033 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:45:55.868689 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:45:55.871394 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 24 06:45:55.874949 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 24 06:45:55.886374 kernel: libata version 3.00 loaded. Nov 24 06:45:55.910389 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Nov 24 06:45:55.913360 kernel: ahci 0000:00:1f.2: version 3.0 Nov 24 06:45:55.915983 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Nov 24 06:45:56.000072 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 24 06:45:56.000100 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 24 06:45:56.000284 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 24 06:45:56.000460 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 24 06:45:56.000594 kernel: scsi host0: ahci Nov 24 06:45:56.000743 kernel: scsi host1: ahci Nov 24 06:45:56.000879 kernel: scsi host2: ahci Nov 24 06:45:56.001025 kernel: scsi host3: ahci Nov 24 06:45:56.001271 kernel: scsi host4: ahci Nov 24 06:45:56.001433 kernel: scsi host5: ahci Nov 24 06:45:56.001569 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Nov 24 06:45:56.001580 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Nov 24 06:45:56.001591 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Nov 24 06:45:56.001601 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Nov 24 06:45:56.001612 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Nov 24 06:45:56.001626 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Nov 24 06:45:56.000409 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 24 06:45:56.019784 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 24 06:45:56.031718 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 24 06:45:56.036097 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 24 06:45:56.049956 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Nov 24 06:45:56.060776 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 24 06:45:56.082865 disk-uuid[613]: Primary Header is updated. Nov 24 06:45:56.082865 disk-uuid[613]: Secondary Entries is updated. Nov 24 06:45:56.082865 disk-uuid[613]: Secondary Header is updated. Nov 24 06:45:56.087765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:45:56.092362 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:45:56.237356 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 24 06:45:56.237411 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 24 06:45:56.238370 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 24 06:45:56.239365 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 24 06:45:56.242373 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 24 06:45:56.242399 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 24 06:45:56.243370 kernel: ata3.00: LPM support broken, forcing max_power Nov 24 06:45:56.244684 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 24 06:45:56.244698 kernel: ata3.00: applying bridge limits Nov 24 06:45:56.246576 kernel: ata3.00: LPM support broken, forcing max_power Nov 24 06:45:56.246599 kernel: ata3.00: configured for UDMA/100 Nov 24 06:45:56.249379 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 24 06:45:56.302802 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 24 06:45:56.303022 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 24 06:45:56.321563 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 24 06:45:56.693431 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 24 06:45:56.698387 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 24 06:45:56.702648 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 24 06:45:56.706668 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 24 06:45:56.711230 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 24 06:45:56.737541 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 24 06:45:57.094363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 24 06:45:57.094442 disk-uuid[614]: The operation has completed successfully. Nov 24 06:45:57.122636 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 24 06:45:57.122751 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 24 06:45:57.159589 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 24 06:45:57.184808 sh[644]: Success Nov 24 06:45:57.205841 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 24 06:45:57.205876 kernel: device-mapper: uevent: version 1.0.3 Nov 24 06:45:57.207552 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 24 06:45:57.216350 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Nov 24 06:45:57.243382 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 24 06:45:57.248588 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 24 06:45:57.266939 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 24 06:45:57.272363 kernel: BTRFS: device fsid 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (656) Nov 24 06:45:57.276261 kernel: BTRFS info (device dm-0): first mount of filesystem 3af95a3e-5df6-49e0-91e3-ddf2109f68c7 Nov 24 06:45:57.276283 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:45:57.281565 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 24 06:45:57.281586 kernel: BTRFS info (device dm-0): enabling free space tree Nov 24 06:45:57.282835 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 24 06:45:57.284734 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 24 06:45:57.286376 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 24 06:45:57.289326 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 24 06:45:57.295500 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 24 06:45:57.322020 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (689) Nov 24 06:45:57.322073 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:45:57.322084 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:45:57.327105 kernel: BTRFS info (device vda6): turning on async discard Nov 24 06:45:57.327150 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 06:45:57.332376 kernel: BTRFS info (device vda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:45:57.333861 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 24 06:45:57.336576 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 24 06:45:57.430134 ignition[737]: Ignition 2.22.0 Nov 24 06:45:57.430148 ignition[737]: Stage: fetch-offline Nov 24 06:45:57.430196 ignition[737]: no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:57.430205 ignition[737]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:57.434269 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 24 06:45:57.430290 ignition[737]: parsed url from cmdline: "" Nov 24 06:45:57.437067 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 24 06:45:57.430297 ignition[737]: no config URL provided Nov 24 06:45:57.430302 ignition[737]: reading system config file "/usr/lib/ignition/user.ign" Nov 24 06:45:57.430310 ignition[737]: no config at "/usr/lib/ignition/user.ign" Nov 24 06:45:57.430367 ignition[737]: op(1): [started] loading QEMU firmware config module Nov 24 06:45:57.430373 ignition[737]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 24 06:45:57.450488 ignition[737]: op(1): [finished] loading QEMU firmware config module Nov 24 06:45:57.488506 systemd-networkd[833]: lo: Link UP Nov 24 06:45:57.488516 systemd-networkd[833]: lo: Gained carrier Nov 24 06:45:57.490461 systemd-networkd[833]: Enumeration completed Nov 24 06:45:57.490822 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 06:45:57.490827 systemd-networkd[833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 24 06:45:57.491115 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 24 06:45:57.492114 systemd[1]: Reached target network.target - Network. 
Nov 24 06:45:57.492321 systemd-networkd[833]: eth0: Link UP Nov 24 06:45:57.492767 systemd-networkd[833]: eth0: Gained carrier Nov 24 06:45:57.492776 systemd-networkd[833]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 24 06:45:57.523404 systemd-networkd[833]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 24 06:45:57.552058 ignition[737]: parsing config with SHA512: 80ea5d6461eb11d09445d8d0a76a79a4691782a70cc2e085bed18ef9630a7d6ec393036ac65998e424414744b1a31a4d3faa3ae4f90024e3dffbbe7a8099ca99 Nov 24 06:45:57.558462 unknown[737]: fetched base config from "system" Nov 24 06:45:57.558482 unknown[737]: fetched user config from "qemu" Nov 24 06:45:57.558978 ignition[737]: fetch-offline: fetch-offline passed Nov 24 06:45:57.559060 ignition[737]: Ignition finished successfully Nov 24 06:45:57.562586 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 24 06:45:57.563459 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 24 06:45:57.565915 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 24 06:45:57.614111 ignition[839]: Ignition 2.22.0 Nov 24 06:45:57.614125 ignition[839]: Stage: kargs Nov 24 06:45:57.614299 ignition[839]: no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:57.614311 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:57.615362 ignition[839]: kargs: kargs passed Nov 24 06:45:57.615408 ignition[839]: Ignition finished successfully Nov 24 06:45:57.624663 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 24 06:45:57.628826 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 24 06:45:57.668355 ignition[847]: Ignition 2.22.0 Nov 24 06:45:57.668376 ignition[847]: Stage: disks Nov 24 06:45:57.668562 ignition[847]: no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:57.668574 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:57.669649 ignition[847]: disks: disks passed Nov 24 06:45:57.674147 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 24 06:45:57.669702 ignition[847]: Ignition finished successfully Nov 24 06:45:57.675962 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 24 06:45:57.679162 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 24 06:45:57.679707 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 24 06:45:57.685805 systemd[1]: Reached target sysinit.target - System Initialization. Nov 24 06:45:57.688939 systemd[1]: Reached target basic.target - Basic System. Nov 24 06:45:57.696227 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 24 06:45:57.724801 systemd-fsck[857]: ROOT: clean, 15/553520 files, 52789/553472 blocks Nov 24 06:45:57.732596 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 24 06:45:57.734690 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 24 06:45:57.851365 kernel: EXT4-fs (vda9): mounted filesystem f89e2a65-2a4a-426b-9659-02844cc29a2a r/w with ordered data mode. Quota mode: none. Nov 24 06:45:57.851632 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 24 06:45:57.852883 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 24 06:45:57.857889 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 06:45:57.859918 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 24 06:45:57.861586 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Nov 24 06:45:57.861630 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 24 06:45:57.861653 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 24 06:45:57.884171 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 24 06:45:57.888695 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 24 06:45:57.892405 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Nov 24 06:45:57.894374 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:45:57.894411 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:45:57.900094 kernel: BTRFS info (device vda6): turning on async discard Nov 24 06:45:57.900167 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 06:45:57.902409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 24 06:45:57.928887 initrd-setup-root[889]: cut: /sysroot/etc/passwd: No such file or directory Nov 24 06:45:57.935695 initrd-setup-root[896]: cut: /sysroot/etc/group: No such file or directory Nov 24 06:45:57.941113 initrd-setup-root[903]: cut: /sysroot/etc/shadow: No such file or directory Nov 24 06:45:57.944706 initrd-setup-root[910]: cut: /sysroot/etc/gshadow: No such file or directory Nov 24 06:45:58.045785 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 24 06:45:58.050391 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 24 06:45:58.054280 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 24 06:45:58.072805 kernel: BTRFS info (device vda6): last unmount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:45:58.084534 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Nov 24 06:45:58.104772 ignition[979]: INFO : Ignition 2.22.0 Nov 24 06:45:58.104772 ignition[979]: INFO : Stage: mount Nov 24 06:45:58.107262 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:58.107262 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:58.107262 ignition[979]: INFO : mount: mount passed Nov 24 06:45:58.107262 ignition[979]: INFO : Ignition finished successfully Nov 24 06:45:58.115836 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 24 06:45:58.119117 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 24 06:45:58.274886 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 24 06:45:58.277067 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 24 06:45:58.309445 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (991) Nov 24 06:45:58.309489 kernel: BTRFS info (device vda6): first mount of filesystem 1e21b02a-5e52-4507-8281-b06fd4c187c7 Nov 24 06:45:58.309500 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 24 06:45:58.314729 kernel: BTRFS info (device vda6): turning on async discard Nov 24 06:45:58.314748 kernel: BTRFS info (device vda6): enabling free space tree Nov 24 06:45:58.316696 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 24 06:45:58.351597 ignition[1008]: INFO : Ignition 2.22.0 Nov 24 06:45:58.351597 ignition[1008]: INFO : Stage: files Nov 24 06:45:58.354321 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 24 06:45:58.354321 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 24 06:45:58.354321 ignition[1008]: DEBUG : files: compiled without relabeling support, skipping Nov 24 06:45:58.354321 ignition[1008]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 24 06:45:58.354321 ignition[1008]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 24 06:45:58.364375 ignition[1008]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 24 06:45:58.364375 ignition[1008]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 24 06:45:58.364375 ignition[1008]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 24 06:45:58.364375 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 06:45:58.364375 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 24 06:45:58.355877 unknown[1008]: wrote ssh authorized keys file for user: core Nov 24 06:45:58.401344 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 24 06:45:58.475557 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 24 06:45:58.475557 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 24 06:45:58.483164 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Nov 24 06:45:58.483164 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 24 06:45:58.483164 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 24 06:45:58.483164 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 06:45:58.483164 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 24 06:45:58.483164 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 06:45:58.483164 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 24 06:45:58.483164 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 06:45:58.483164 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 24 06:45:58.483164 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 24 06:45:58.514788 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 24 06:45:58.514788 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 24 06:45:58.514788 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 24 06:45:58.913007 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 24 06:45:59.013509 systemd-networkd[833]: eth0: Gained IPv6LL Nov 24 06:45:59.285744 ignition[1008]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 24 06:45:59.285744 ignition[1008]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 24 06:45:59.291213 ignition[1008]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 06:45:59.295664 ignition[1008]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 24 06:45:59.295664 ignition[1008]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 24 06:45:59.295664 ignition[1008]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Nov 24 06:45:59.302574 ignition[1008]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 24 06:45:59.305643 ignition[1008]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 24 06:45:59.305643 ignition[1008]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Nov 24 06:45:59.305643 ignition[1008]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Nov 24 06:45:59.324483 ignition[1008]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 24 06:45:59.330770 ignition[1008]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 24 
06:45:59.333272 ignition[1008]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Nov 24 06:45:59.333272 ignition[1008]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Nov 24 06:45:59.333272 ignition[1008]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Nov 24 06:45:59.333272 ignition[1008]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 24 06:45:59.333272 ignition[1008]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 24 06:45:59.333272 ignition[1008]: INFO : files: files passed Nov 24 06:45:59.333272 ignition[1008]: INFO : Ignition finished successfully Nov 24 06:45:59.339234 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 24 06:45:59.342057 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 24 06:45:59.346613 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 24 06:45:59.372591 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 24 06:45:59.372732 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 24 06:45:59.378581 initrd-setup-root-after-ignition[1037]: grep: /sysroot/oem/oem-release: No such file or directory Nov 24 06:45:59.383426 initrd-setup-root-after-ignition[1039]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 06:45:59.383426 initrd-setup-root-after-ignition[1039]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 24 06:45:59.388424 initrd-setup-root-after-ignition[1043]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 24 06:45:59.392263 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Nov 24 06:45:59.393079 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 24 06:45:59.399663 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 24 06:45:59.461822 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 24 06:45:59.461951 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 24 06:45:59.463329 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 24 06:45:59.467768 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 24 06:45:59.470906 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 24 06:45:59.473598 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 24 06:45:59.494833 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 24 06:45:59.497272 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 24 06:45:59.519745 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 24 06:45:59.520746 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 06:45:59.521343 systemd[1]: Stopped target timers.target - Timer Units.
Nov 24 06:45:59.528559 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 24 06:45:59.528668 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 24 06:45:59.533567 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 24 06:45:59.537479 systemd[1]: Stopped target basic.target - Basic System.
Nov 24 06:45:59.540948 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 24 06:45:59.544072 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 24 06:45:59.547492 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 24 06:45:59.548435 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 24 06:45:59.554267 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 24 06:45:59.555262 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 24 06:45:59.555853 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 24 06:45:59.564051 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 24 06:45:59.567874 systemd[1]: Stopped target swap.target - Swaps.
Nov 24 06:45:59.570968 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 24 06:45:59.571101 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 24 06:45:59.576375 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 24 06:45:59.577320 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 06:45:59.583357 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 24 06:45:59.585363 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 06:45:59.588882 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 24 06:45:59.588993 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 24 06:45:59.594646 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 24 06:45:59.594762 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 24 06:45:59.595770 systemd[1]: Stopped target paths.target - Path Units.
Nov 24 06:45:59.601443 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 24 06:45:59.607401 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 06:45:59.608130 systemd[1]: Stopped target slices.target - Slice Units.
Nov 24 06:45:59.612288 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 24 06:45:59.614962 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 24 06:45:59.615051 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 24 06:45:59.617772 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 24 06:45:59.617851 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 24 06:45:59.618319 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 24 06:45:59.618444 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 24 06:45:59.624558 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 24 06:45:59.624658 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 24 06:45:59.632630 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 24 06:45:59.633676 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 24 06:45:59.633781 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 06:45:59.638074 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 24 06:45:59.646082 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 24 06:45:59.647813 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 06:45:59.651531 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 24 06:45:59.653101 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 24 06:45:59.661931 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 24 06:45:59.662043 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 24 06:45:59.669449 ignition[1064]: INFO : Ignition 2.22.0
Nov 24 06:45:59.669449 ignition[1064]: INFO : Stage: umount
Nov 24 06:45:59.672024 ignition[1064]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 24 06:45:59.672024 ignition[1064]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 24 06:45:59.672024 ignition[1064]: INFO : umount: umount passed
Nov 24 06:45:59.672024 ignition[1064]: INFO : Ignition finished successfully
Nov 24 06:45:59.673374 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 24 06:45:59.673933 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 24 06:45:59.674042 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 24 06:45:59.678275 systemd[1]: Stopped target network.target - Network.
Nov 24 06:45:59.680994 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 24 06:45:59.681124 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 24 06:45:59.682295 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 24 06:45:59.682351 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 24 06:45:59.686864 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 24 06:45:59.686916 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 24 06:45:59.690402 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 24 06:45:59.690450 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 24 06:45:59.693931 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 24 06:45:59.697132 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 24 06:45:59.704526 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 24 06:45:59.704670 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 24 06:45:59.712211 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 24 06:45:59.712552 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 24 06:45:59.712602 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 06:45:59.719659 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 24 06:45:59.719982 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 24 06:45:59.720134 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 24 06:45:59.726579 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 24 06:45:59.727618 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 24 06:45:59.728317 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 24 06:45:59.728402 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 06:45:59.737711 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 24 06:45:59.738576 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 24 06:45:59.738624 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 24 06:45:59.745034 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 24 06:45:59.745080 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 24 06:45:59.748715 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 24 06:45:59.748765 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 24 06:45:59.749843 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 06:45:59.754952 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 24 06:45:59.774985 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 24 06:45:59.778498 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 24 06:45:59.796379 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 24 06:45:59.796519 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 24 06:45:59.797948 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 24 06:45:59.797999 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 24 06:45:59.805421 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 24 06:45:59.805596 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 06:45:59.806188 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 24 06:45:59.806237 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 24 06:45:59.810886 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 24 06:45:59.810924 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 06:45:59.817547 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 24 06:45:59.817595 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 24 06:45:59.822682 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 24 06:45:59.822729 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 24 06:45:59.827067 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 24 06:45:59.827127 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 24 06:45:59.833554 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 24 06:45:59.834347 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 24 06:45:59.834394 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 06:45:59.842240 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 24 06:45:59.842286 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 06:45:59.847867 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 24 06:45:59.847911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 06:45:59.867221 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 24 06:45:59.867368 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 24 06:45:59.868370 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 24 06:45:59.873730 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 24 06:45:59.893901 systemd[1]: Switching root.
Nov 24 06:45:59.931654 systemd-journald[201]: Journal stopped
Nov 24 06:46:01.005142 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Nov 24 06:46:01.005204 kernel: SELinux: policy capability network_peer_controls=1
Nov 24 06:46:01.005218 kernel: SELinux: policy capability open_perms=1
Nov 24 06:46:01.005234 kernel: SELinux: policy capability extended_socket_class=1
Nov 24 06:46:01.005245 kernel: SELinux: policy capability always_check_network=0
Nov 24 06:46:01.005259 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 24 06:46:01.005270 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 24 06:46:01.005286 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 24 06:46:01.005298 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 24 06:46:01.005309 kernel: SELinux: policy capability userspace_initial_context=0
Nov 24 06:46:01.005320 kernel: audit: type=1403 audit(1763966760.189:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 24 06:46:01.005350 systemd[1]: Successfully loaded SELinux policy in 61.491ms.
Nov 24 06:46:01.005371 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.114ms.
Nov 24 06:46:01.005384 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 24 06:46:01.005399 systemd[1]: Detected virtualization kvm.
Nov 24 06:46:01.005411 systemd[1]: Detected architecture x86-64.
Nov 24 06:46:01.005423 systemd[1]: Detected first boot.
Nov 24 06:46:01.005435 systemd[1]: Initializing machine ID from VM UUID.
Nov 24 06:46:01.005447 zram_generator::config[1110]: No configuration found.
Nov 24 06:46:01.005462 kernel: Guest personality initialized and is inactive
Nov 24 06:46:01.005473 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Nov 24 06:46:01.005484 kernel: Initialized host personality
Nov 24 06:46:01.005501 kernel: NET: Registered PF_VSOCK protocol family
Nov 24 06:46:01.005513 systemd[1]: Populated /etc with preset unit settings.
Nov 24 06:46:01.005525 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Nov 24 06:46:01.005537 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 24 06:46:01.005549 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 24 06:46:01.005561 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 24 06:46:01.005573 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 24 06:46:01.005585 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 24 06:46:01.005597 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 24 06:46:01.005611 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 24 06:46:01.005623 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 24 06:46:01.005635 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 24 06:46:01.005647 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 24 06:46:01.005659 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 24 06:46:01.005671 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 24 06:46:01.005683 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 24 06:46:01.005695 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 24 06:46:01.005709 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 24 06:46:01.005728 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 24 06:46:01.005740 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 24 06:46:01.005752 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 24 06:46:01.005764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 24 06:46:01.005776 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 24 06:46:01.005788 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 24 06:46:01.005800 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 24 06:46:01.005814 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 24 06:46:01.005826 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 24 06:46:01.005838 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 24 06:46:01.005850 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 24 06:46:01.005862 systemd[1]: Reached target slices.target - Slice Units.
Nov 24 06:46:01.005873 systemd[1]: Reached target swap.target - Swaps.
Nov 24 06:46:01.005885 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 24 06:46:01.005897 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 24 06:46:01.005909 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 24 06:46:01.005923 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 24 06:46:01.005935 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 24 06:46:01.005947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 24 06:46:01.005959 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 24 06:46:01.005970 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 24 06:46:01.005983 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 24 06:46:01.005995 systemd[1]: Mounting media.mount - External Media Directory...
Nov 24 06:46:01.006008 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:46:01.006020 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 24 06:46:01.006034 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 24 06:46:01.006046 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 24 06:46:01.006058 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 24 06:46:01.006071 systemd[1]: Reached target machines.target - Containers.
Nov 24 06:46:01.006082 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 24 06:46:01.006102 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 06:46:01.006115 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 24 06:46:01.006127 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 24 06:46:01.006141 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 06:46:01.006153 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 24 06:46:01.006170 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 06:46:01.006182 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 24 06:46:01.006194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 06:46:01.006206 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 24 06:46:01.006218 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 24 06:46:01.006230 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 24 06:46:01.006243 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 24 06:46:01.006267 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 24 06:46:01.006281 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 06:46:01.006302 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 24 06:46:01.006319 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 24 06:46:01.006356 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 24 06:46:01.006368 kernel: loop: module loaded
Nov 24 06:46:01.006380 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 24 06:46:01.006392 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 24 06:46:01.006412 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 24 06:46:01.006438 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 24 06:46:01.006455 systemd[1]: Stopped verity-setup.service.
Nov 24 06:46:01.006467 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:46:01.006500 systemd-journald[1188]: Collecting audit messages is disabled.
Nov 24 06:46:01.006523 kernel: ACPI: bus type drm_connector registered
Nov 24 06:46:01.006535 systemd-journald[1188]: Journal started
Nov 24 06:46:01.006557 systemd-journald[1188]: Runtime Journal (/run/log/journal/0010f5b85c604812976691566ab1369c) is 6M, max 48.3M, 42.2M free.
Nov 24 06:46:00.705878 systemd[1]: Queued start job for default target multi-user.target.
Nov 24 06:46:00.727303 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 24 06:46:00.727758 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 24 06:46:01.009376 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 24 06:46:01.011242 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 24 06:46:01.012386 kernel: fuse: init (API version 7.41)
Nov 24 06:46:01.014469 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 24 06:46:01.016497 systemd[1]: Mounted media.mount - External Media Directory.
Nov 24 06:46:01.018371 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 24 06:46:01.020479 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 24 06:46:01.022539 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 24 06:46:01.024431 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 24 06:46:01.026625 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 24 06:46:01.028911 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 24 06:46:01.029148 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 24 06:46:01.031369 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 24 06:46:01.031583 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 24 06:46:01.033687 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 06:46:01.033895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 24 06:46:01.035882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 06:46:01.036100 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 06:46:01.038450 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 24 06:46:01.038661 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 24 06:46:01.040672 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 24 06:46:01.040880 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 24 06:46:01.042935 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 24 06:46:01.045006 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 24 06:46:01.047350 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 24 06:46:01.049653 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 24 06:46:01.064708 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 24 06:46:01.067812 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 24 06:46:01.070541 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 24 06:46:01.072344 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 24 06:46:01.072436 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 24 06:46:01.075010 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 24 06:46:01.088916 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 24 06:46:01.090853 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 06:46:01.092458 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 24 06:46:01.095313 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 24 06:46:01.096054 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 06:46:01.097060 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 24 06:46:01.099131 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 06:46:01.105433 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 24 06:46:01.108503 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 24 06:46:01.111904 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 24 06:46:01.117254 systemd-journald[1188]: Time spent on flushing to /var/log/journal/0010f5b85c604812976691566ab1369c is 31.048ms for 979 entries.
Nov 24 06:46:01.117254 systemd-journald[1188]: System Journal (/var/log/journal/0010f5b85c604812976691566ab1369c) is 8M, max 195.6M, 187.6M free.
Nov 24 06:46:01.158204 systemd-journald[1188]: Received client request to flush runtime journal.
Nov 24 06:46:01.158252 kernel: loop0: detected capacity change from 0 to 110984
Nov 24 06:46:01.123523 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 24 06:46:01.126752 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 24 06:46:01.130376 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 24 06:46:01.132920 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 24 06:46:01.141119 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 24 06:46:01.144576 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 24 06:46:01.154555 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 24 06:46:01.160296 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 24 06:46:01.170347 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 24 06:46:01.184719 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 24 06:46:01.190536 kernel: loop1: detected capacity change from 0 to 219144
Nov 24 06:46:01.191501 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 24 06:46:01.194300 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 24 06:46:01.218934 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Nov 24 06:46:01.219481 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Nov 24 06:46:01.229383 kernel: loop2: detected capacity change from 0 to 128560
Nov 24 06:46:01.226134 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 24 06:46:01.262377 kernel: loop3: detected capacity change from 0 to 110984
Nov 24 06:46:01.273368 kernel: loop4: detected capacity change from 0 to 219144
Nov 24 06:46:01.286369 kernel: loop5: detected capacity change from 0 to 128560
Nov 24 06:46:01.301362 (sd-merge)[1253]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Nov 24 06:46:01.301896 (sd-merge)[1253]: Merged extensions into '/usr'.
Nov 24 06:46:01.307294 systemd[1]: Reload requested from client PID 1229 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 24 06:46:01.307413 systemd[1]: Reloading...
Nov 24 06:46:01.388190 zram_generator::config[1285]: No configuration found.
Nov 24 06:46:01.413864 ldconfig[1224]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 24 06:46:01.568969 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 24 06:46:01.569290 systemd[1]: Reloading finished in 261 ms.
Nov 24 06:46:01.612981 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 24 06:46:01.615165 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 24 06:46:01.617366 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 24 06:46:01.644023 systemd[1]: Starting ensure-sysext.service...
Nov 24 06:46:01.646390 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 24 06:46:01.649675 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 24 06:46:01.669882 systemd[1]: Reload requested from client PID 1318 ('systemctl') (unit ensure-sysext.service)...
Nov 24 06:46:01.669974 systemd[1]: Reloading...
Nov 24 06:46:01.682356 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 24 06:46:01.682411 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 24 06:46:01.682840 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 24 06:46:01.683225 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 24 06:46:01.684554 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 24 06:46:01.684969 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Nov 24 06:46:01.685071 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Nov 24 06:46:01.690047 systemd-udevd[1320]: Using default interface naming scheme 'v255'.
Nov 24 06:46:01.690794 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Nov 24 06:46:01.690895 systemd-tmpfiles[1319]: Skipping /boot
Nov 24 06:46:01.700845 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Nov 24 06:46:01.700858 systemd-tmpfiles[1319]: Skipping /boot
Nov 24 06:46:01.726361 zram_generator::config[1350]: No configuration found.
Nov 24 06:46:01.851391 kernel: mousedev: PS/2 mouse device common for all mice
Nov 24 06:46:01.861372 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 24 06:46:01.867408 kernel: ACPI: button: Power Button [PWRF]
Nov 24 06:46:01.884479 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 24 06:46:01.884770 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 24 06:46:01.962715 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 24 06:46:01.962999 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 24 06:46:01.965535 systemd[1]: Reloading finished in 295 ms.
Nov 24 06:46:01.980656 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 24 06:46:02.036462 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 24 06:46:02.044113 kernel: kvm_amd: TSC scaling supported
Nov 24 06:46:02.044157 kernel: kvm_amd: Nested Virtualization enabled
Nov 24 06:46:02.044171 kernel: kvm_amd: Nested Paging enabled
Nov 24 06:46:02.044851 kernel: kvm_amd: LBR virtualization supported
Nov 24 06:46:02.046601 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Nov 24 06:46:02.046620 kernel: kvm_amd: Virtual GIF supported
Nov 24 06:46:02.067660 systemd[1]: Finished ensure-sysext.service.
Nov 24 06:46:02.072370 kernel: EDAC MC: Ver: 3.0.0
Nov 24 06:46:02.093130 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:46:02.094296 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 24 06:46:02.097211 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 24 06:46:02.099133 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 24 06:46:02.100196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 24 06:46:02.102428 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 24 06:46:02.111502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 24 06:46:02.114238 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 24 06:46:02.115943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 24 06:46:02.117006 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 24 06:46:02.119164 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 24 06:46:02.120585 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 24 06:46:02.124532 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 24 06:46:02.130531 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 24 06:46:02.133593 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 24 06:46:02.136459 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 24 06:46:02.139148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 24 06:46:02.141081 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 24 06:46:02.142131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 24 06:46:02.142352 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 24 06:46:02.144875 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 24 06:46:02.145105 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 24 06:46:02.147744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 24 06:46:02.147948 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 24 06:46:02.151279 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 24 06:46:02.151529 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 24 06:46:02.153740 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 24 06:46:02.166456 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 24 06:46:02.169543 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 24 06:46:02.171096 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 24 06:46:02.171190 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 24 06:46:02.173559 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 24 06:46:02.177808 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 24 06:46:02.178550 augenrules[1483]: No rules
Nov 24 06:46:02.179153 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 24 06:46:02.184629 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 24 06:46:02.193202 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 24 06:46:02.196229 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 24 06:46:02.199193 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 24 06:46:02.228434 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 24 06:46:02.285843 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 24 06:46:02.306532 systemd-networkd[1449]: lo: Link UP
Nov 24 06:46:02.306541 systemd-networkd[1449]: lo: Gained carrier
Nov 24 06:46:02.308106 systemd-networkd[1449]: Enumeration completed
Nov 24 06:46:02.308234 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 24 06:46:02.309261 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 06:46:02.309271 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 24 06:46:02.310017 systemd-networkd[1449]: eth0: Link UP
Nov 24 06:46:02.310186 systemd-networkd[1449]: eth0: Gained carrier
Nov 24 06:46:02.310205 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 24 06:46:02.310421 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 24 06:46:02.312465 systemd[1]: Reached target time-set.target - System Time Set.
Nov 24 06:46:02.315399 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 24 06:46:02.318208 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 24 06:46:02.321223 systemd-resolved[1456]: Positive Trust Anchors:
Nov 24 06:46:02.321241 systemd-resolved[1456]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 24 06:46:02.321275 systemd-resolved[1456]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 24 06:46:02.325491 systemd-resolved[1456]: Defaulting to hostname 'linux'.
Nov 24 06:46:02.326989 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 24 06:46:02.327643 systemd-networkd[1449]: eth0: DHCPv4 address 10.0.0.32/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 24 06:46:02.328501 systemd-timesyncd[1457]: Network configuration changed, trying to establish connection.
Nov 24 06:46:02.328886 systemd[1]: Reached target network.target - Network.
Nov 24 06:46:02.330405 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 24 06:46:03.428073 systemd-resolved[1456]: Clock change detected. Flushing caches.
Nov 24 06:46:03.428099 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 24 06:46:03.428180 systemd-timesyncd[1457]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 24 06:46:03.428242 systemd-timesyncd[1457]: Initial clock synchronization to Mon 2025-11-24 06:46:03.428031 UTC.
Nov 24 06:46:03.429924 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 24 06:46:03.431955 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 24 06:46:03.433932 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Nov 24 06:46:03.435899 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 24 06:46:03.437704 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 24 06:46:03.439702 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 24 06:46:03.441684 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 24 06:46:03.441715 systemd[1]: Reached target paths.target - Path Units.
Nov 24 06:46:03.443148 systemd[1]: Reached target timers.target - Timer Units.
Nov 24 06:46:03.445390 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 24 06:46:03.448735 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 24 06:46:03.452000 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Nov 24 06:46:03.454092 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Nov 24 06:46:03.456074 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Nov 24 06:46:03.460246 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 24 06:46:03.462335 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Nov 24 06:46:03.465382 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 24 06:46:03.467612 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 24 06:46:03.471392 systemd[1]: Reached target sockets.target - Socket Units.
Nov 24 06:46:03.472935 systemd[1]: Reached target basic.target - Basic System.
Nov 24 06:46:03.474469 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 24 06:46:03.474499 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 24 06:46:03.475645 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 24 06:46:03.478399 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 24 06:46:03.493846 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 24 06:46:03.496819 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 24 06:46:03.499375 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 24 06:46:03.500967 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 24 06:46:03.509090 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Nov 24 06:46:03.510465 jq[1513]: false
Nov 24 06:46:03.512834 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 24 06:46:03.515819 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 24 06:46:03.517536 extend-filesystems[1514]: Found /dev/vda6
Nov 24 06:46:03.518636 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 24 06:46:03.521915 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing passwd entry cache
Nov 24 06:46:03.520929 oslogin_cache_refresh[1515]: Refreshing passwd entry cache
Nov 24 06:46:03.522950 extend-filesystems[1514]: Found /dev/vda9
Nov 24 06:46:03.524839 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 24 06:46:03.526705 extend-filesystems[1514]: Checking size of /dev/vda9
Nov 24 06:46:03.529321 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting users, quitting
Nov 24 06:46:03.529321 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 24 06:46:03.529311 oslogin_cache_refresh[1515]: Failure getting users, quitting
Nov 24 06:46:03.529429 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing group entry cache
Nov 24 06:46:03.529325 oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Nov 24 06:46:03.529366 oslogin_cache_refresh[1515]: Refreshing group entry cache
Nov 24 06:46:03.534926 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 24 06:46:03.538465 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting groups, quitting
Nov 24 06:46:03.538465 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 24 06:46:03.535542 oslogin_cache_refresh[1515]: Failure getting groups, quitting
Nov 24 06:46:03.537420 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 24 06:46:03.535553 oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Nov 24 06:46:03.538186 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 24 06:46:03.539797 systemd[1]: Starting update-engine.service - Update Engine...
Nov 24 06:46:03.542657 extend-filesystems[1514]: Resized partition /dev/vda9
Nov 24 06:46:03.543634 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 24 06:46:03.551321 extend-filesystems[1538]: resize2fs 1.47.3 (8-Jul-2025)
Nov 24 06:46:03.554006 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 24 06:46:03.558481 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Nov 24 06:46:03.559843 jq[1536]: true
Nov 24 06:46:03.560887 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 24 06:46:03.561148 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 24 06:46:03.561530 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Nov 24 06:46:03.561762 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Nov 24 06:46:03.564105 systemd[1]: motdgen.service: Deactivated successfully.
Nov 24 06:46:03.564531 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 24 06:46:03.567956 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 24 06:46:03.572190 update_engine[1533]: I20251124 06:46:03.570225 1533 main.cc:92] Flatcar Update Engine starting
Nov 24 06:46:03.568236 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 24 06:46:03.588233 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 24 06:46:03.591788 jq[1543]: true
Nov 24 06:46:03.631479 tar[1542]: linux-amd64/LICENSE
Nov 24 06:46:03.631479 tar[1542]: linux-amd64/helm
Nov 24 06:46:03.634690 systemd-logind[1532]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 24 06:46:03.634715 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 24 06:46:03.635062 systemd-logind[1532]: New seat seat0.
Nov 24 06:46:03.636792 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 24 06:46:03.660035 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Nov 24 06:46:03.682189 dbus-daemon[1511]: [system] SELinux support is enabled
Nov 24 06:46:03.682569 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 24 06:46:03.683029 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 24 06:46:03.683029 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 24 06:46:03.683029 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Nov 24 06:46:03.691991 extend-filesystems[1514]: Resized filesystem in /dev/vda9
Nov 24 06:46:03.694004 bash[1572]: Updated "/home/core/.ssh/authorized_keys"
Nov 24 06:46:03.694830 update_engine[1533]: I20251124 06:46:03.694662 1533 update_check_scheduler.cc:74] Next update check in 2m40s
Nov 24 06:46:03.697191 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 24 06:46:03.697595 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 24 06:46:03.700323 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 24 06:46:03.706058 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 24 06:46:03.706205 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 24 06:46:03.706244 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 24 06:46:03.709174 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 24 06:46:03.709339 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 24 06:46:03.710146 dbus-daemon[1511]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 24 06:46:03.712388 systemd[1]: Started update-engine.service - Update Engine.
Nov 24 06:46:03.718749 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 24 06:46:03.767768 sshd_keygen[1541]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 24 06:46:03.769478 locksmithd[1576]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 24 06:46:03.792755 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 24 06:46:03.797432 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 24 06:46:03.816774 systemd[1]: issuegen.service: Deactivated successfully.
Nov 24 06:46:03.817128 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 24 06:46:03.823071 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 24 06:46:03.839507 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 24 06:46:03.844270 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 24 06:46:03.848385 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 24 06:46:03.851034 systemd[1]: Reached target getty.target - Login Prompts.
Nov 24 06:46:03.857110 containerd[1544]: time="2025-11-24T06:46:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Nov 24 06:46:03.860460 containerd[1544]: time="2025-11-24T06:46:03.858133001Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Nov 24 06:46:03.871054 containerd[1544]: time="2025-11-24T06:46:03.870997040Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.187µs"
Nov 24 06:46:03.871054 containerd[1544]: time="2025-11-24T06:46:03.871042745Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Nov 24 06:46:03.871108 containerd[1544]: time="2025-11-24T06:46:03.871058796Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Nov 24 06:46:03.871273 containerd[1544]: time="2025-11-24T06:46:03.871247320Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Nov 24 06:46:03.871273 containerd[1544]: time="2025-11-24T06:46:03.871269622Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Nov 24 06:46:03.871332 containerd[1544]: time="2025-11-24T06:46:03.871293116Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 24 06:46:03.871409 containerd[1544]: time="2025-11-24T06:46:03.871372125Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Nov 24 06:46:03.871409 containerd[1544]: time="2025-11-24T06:46:03.871394136Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 24 06:46:03.871816 containerd[1544]: time="2025-11-24T06:46:03.871774762Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Nov 24 06:46:03.871816 containerd[1544]: time="2025-11-24T06:46:03.871801703Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 24 06:46:03.871881 containerd[1544]: time="2025-11-24T06:46:03.871815759Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Nov 24 06:46:03.871881 containerd[1544]: time="2025-11-24T06:46:03.871826249Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Nov 24 06:46:03.871946 containerd[1544]: time="2025-11-24T06:46:03.871924844Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Nov 24 06:46:03.872241 containerd[1544]: time="2025-11-24T06:46:03.872204580Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 24 06:46:03.872269 containerd[1544]: time="2025-11-24T06:46:03.872248282Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Nov 24 06:46:03.872269 containerd[1544]: time="2025-11-24T06:46:03.872264202Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Nov 24 06:46:03.872335 containerd[1544]: time="2025-11-24T06:46:03.872311261Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Nov 24 06:46:03.872677 containerd[1544]: time="2025-11-24T06:46:03.872634579Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Nov 24 06:46:03.872780 containerd[1544]: time="2025-11-24T06:46:03.872722955Z" level=info msg="metadata content store policy set" policy=shared
Nov 24 06:46:03.878748 containerd[1544]: time="2025-11-24T06:46:03.878685977Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Nov 24 06:46:03.878748 containerd[1544]: time="2025-11-24T06:46:03.878725822Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Nov 24 06:46:03.878748 containerd[1544]: time="2025-11-24T06:46:03.878739458Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Nov 24 06:46:03.878748 containerd[1544]: time="2025-11-24T06:46:03.878751501Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Nov 24 06:46:03.878889 containerd[1544]: time="2025-11-24T06:46:03.878763183Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Nov 24 06:46:03.878889 containerd[1544]: time="2025-11-24T06:46:03.878773382Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Nov 24 06:46:03.878889 containerd[1544]: time="2025-11-24T06:46:03.878784773Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Nov 24 06:46:03.878889 containerd[1544]: time="2025-11-24T06:46:03.878795163Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Nov 24 06:46:03.878889 containerd[1544]: time="2025-11-24T06:46:03.878814018Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Nov 24 06:46:03.878889 containerd[1544]: time="2025-11-24T06:46:03.878823817Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Nov 24 06:46:03.878889 containerd[1544]: time="2025-11-24T06:46:03.878832603Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Nov 24 06:46:03.878889 containerd[1544]: time="2025-11-24T06:46:03.878844956Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.878950455Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.878968539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.878981293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.878992714Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.879003815Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.879014425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.879025606Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.879044922Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.879063868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.879078055Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Nov 24 06:46:03.879083 containerd[1544]: time="2025-11-24T06:46:03.879091791Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Nov 24 06:46:03.879372 containerd[1544]: time="2025-11-24T06:46:03.879151693Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Nov 24 06:46:03.879372 containerd[1544]: time="2025-11-24T06:46:03.879183172Z" level=info msg="Start snapshots syncer"
Nov 24 06:46:03.879372 containerd[1544]: time="2025-11-24T06:46:03.879215213Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Nov 24 06:46:03.879666 containerd[1544]: time="2025-11-24T06:46:03.879545173Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Nov 24 06:46:03.879666 containerd[1544]: time="2025-11-24T06:46:03.879632658Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Nov 24 06:46:03.879835 containerd[1544]: time="2025-11-24T06:46:03.879674757Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Nov 24 06:46:03.879835 containerd[1544]: time="2025-11-24T06:46:03.879775055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Nov 24 06:46:03.879835 containerd[1544]: time="2025-11-24T06:46:03.879792228Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Nov 24 06:46:03.879835 containerd[1544]: time="2025-11-24T06:46:03.879802186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Nov 24 06:46:03.879835 containerd[1544]: time="2025-11-24T06:46:03.879812345Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Nov 24 06:46:03.879835 containerd[1544]: time="2025-11-24T06:46:03.879831622Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Nov 24 06:46:03.879835 containerd[1544]: time="2025-11-24T06:46:03.879841821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879852631Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879873180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879882818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879893708Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879920328Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879932071Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879940466Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879949012Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879956376Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879965073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879980983Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.879996351Z" level=info msg="runtime interface created"
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.880001601Z" level=info msg="created NRI interface"
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.880008775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Nov 24 06:46:03.880013 containerd[1544]: time="2025-11-24T06:46:03.880018333Z" level=info msg="Connect containerd service"
Nov 24 06:46:03.880406 containerd[1544]: time="2025-11-24T06:46:03.880044642Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 24 06:46:03.880842 containerd[1544]: time="2025-11-24T06:46:03.880767301Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 24 06:46:03.953019 tar[1542]: linux-amd64/README.md
Nov 24 06:46:03.957820 containerd[1544]: time="2025-11-24T06:46:03.957711894Z" level=info msg="Start subscribing containerd event"
Nov 24 06:46:03.957892 containerd[1544]: time="2025-11-24T06:46:03.957797796Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 24 06:46:03.957892 containerd[1544]: time="2025-11-24T06:46:03.957878678Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 24 06:46:03.959946 containerd[1544]: time="2025-11-24T06:46:03.959713087Z" level=info msg="Start recovering state"
Nov 24 06:46:03.960148 containerd[1544]: time="2025-11-24T06:46:03.960128188Z" level=info msg="Start event monitor"
Nov 24 06:46:03.960230 containerd[1544]: time="2025-11-24T06:46:03.960215051Z" level=info msg="Start cni network conf syncer for default"
Nov 24 06:46:03.960328 containerd[1544]: time="2025-11-24T06:46:03.960298628Z" level=info msg="Start streaming server"
Nov 24 06:46:03.960328 containerd[1544]: time="2025-11-24T06:46:03.960319027Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Nov 24 06:46:03.960328 containerd[1544]: time="2025-11-24T06:46:03.960330909Z" level=info msg="runtime interface starting up..."
Nov 24 06:46:03.960328 containerd[1544]: time="2025-11-24T06:46:03.960339525Z" level=info msg="starting plugins..."
Nov 24 06:46:03.960527 containerd[1544]: time="2025-11-24T06:46:03.960360384Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 24 06:46:03.960636 containerd[1544]: time="2025-11-24T06:46:03.960618269Z" level=info msg="containerd successfully booted in 0.104126s" Nov 24 06:46:03.960712 systemd[1]: Started containerd.service - containerd container runtime. Nov 24 06:46:03.969079 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 24 06:46:05.164645 systemd-networkd[1449]: eth0: Gained IPv6LL Nov 24 06:46:05.168227 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 24 06:46:05.170948 systemd[1]: Reached target network-online.target - Network is Online. Nov 24 06:46:05.174126 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 24 06:46:05.177119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:05.185030 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 24 06:46:05.205127 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 24 06:46:05.205419 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 24 06:46:05.208036 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 24 06:46:05.212721 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 24 06:46:05.903180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:05.905646 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 24 06:46:05.908518 systemd[1]: Startup finished in 2.853s (kernel) + 5.554s (initrd) + 4.683s (userspace) = 13.091s. 
Nov 24 06:46:05.957843 (kubelet)[1646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 06:46:06.314473 kubelet[1646]: E1124 06:46:06.314345 1646 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 06:46:06.318166 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 06:46:06.318358 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 06:46:06.318715 systemd[1]: kubelet.service: Consumed 925ms CPU time, 255.9M memory peak. Nov 24 06:46:09.072248 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 24 06:46:09.073379 systemd[1]: Started sshd@0-10.0.0.32:22-10.0.0.1:36570.service - OpenSSH per-connection server daemon (10.0.0.1:36570). Nov 24 06:46:09.192832 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 36570 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:46:09.194866 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:09.202111 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 24 06:46:09.203349 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 24 06:46:09.209693 systemd-logind[1532]: New session 1 of user core. Nov 24 06:46:09.222474 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 24 06:46:09.225528 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 24 06:46:09.254047 (systemd)[1664]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 24 06:46:09.256731 systemd-logind[1532]: New session c1 of user core. Nov 24 06:46:09.420293 systemd[1664]: Queued start job for default target default.target. Nov 24 06:46:09.437751 systemd[1664]: Created slice app.slice - User Application Slice. Nov 24 06:46:09.437779 systemd[1664]: Reached target paths.target - Paths. Nov 24 06:46:09.437819 systemd[1664]: Reached target timers.target - Timers. Nov 24 06:46:09.439348 systemd[1664]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 24 06:46:09.450694 systemd[1664]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 24 06:46:09.450856 systemd[1664]: Reached target sockets.target - Sockets. Nov 24 06:46:09.450908 systemd[1664]: Reached target basic.target - Basic System. Nov 24 06:46:09.450962 systemd[1664]: Reached target default.target - Main User Target. Nov 24 06:46:09.451003 systemd[1664]: Startup finished in 186ms. Nov 24 06:46:09.451146 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 24 06:46:09.452748 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 24 06:46:09.522932 systemd[1]: Started sshd@1-10.0.0.32:22-10.0.0.1:35942.service - OpenSSH per-connection server daemon (10.0.0.1:35942). Nov 24 06:46:09.573549 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 35942 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:46:09.575096 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:09.580069 systemd-logind[1532]: New session 2 of user core. Nov 24 06:46:09.594655 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 24 06:46:09.648243 sshd[1678]: Connection closed by 10.0.0.1 port 35942 Nov 24 06:46:09.648557 sshd-session[1675]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:09.655694 systemd[1]: sshd@1-10.0.0.32:22-10.0.0.1:35942.service: Deactivated successfully. Nov 24 06:46:09.657294 systemd[1]: session-2.scope: Deactivated successfully. Nov 24 06:46:09.658096 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit. Nov 24 06:46:09.660376 systemd[1]: Started sshd@2-10.0.0.32:22-10.0.0.1:35946.service - OpenSSH per-connection server daemon (10.0.0.1:35946). Nov 24 06:46:09.661229 systemd-logind[1532]: Removed session 2. Nov 24 06:46:09.705187 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 35946 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:46:09.706398 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:09.710399 systemd-logind[1532]: New session 3 of user core. Nov 24 06:46:09.719556 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 24 06:46:09.768752 sshd[1687]: Connection closed by 10.0.0.1 port 35946 Nov 24 06:46:09.769031 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:09.778573 systemd[1]: sshd@2-10.0.0.32:22-10.0.0.1:35946.service: Deactivated successfully. Nov 24 06:46:09.780105 systemd[1]: session-3.scope: Deactivated successfully. Nov 24 06:46:09.780738 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit. Nov 24 06:46:09.782917 systemd[1]: Started sshd@3-10.0.0.32:22-10.0.0.1:35948.service - OpenSSH per-connection server daemon (10.0.0.1:35948). Nov 24 06:46:09.783480 systemd-logind[1532]: Removed session 3. 
Nov 24 06:46:09.826555 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 35948 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:46:09.827744 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:09.831613 systemd-logind[1532]: New session 4 of user core. Nov 24 06:46:09.849556 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 24 06:46:09.901122 sshd[1696]: Connection closed by 10.0.0.1 port 35948 Nov 24 06:46:09.901475 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:09.911930 systemd[1]: sshd@3-10.0.0.32:22-10.0.0.1:35948.service: Deactivated successfully. Nov 24 06:46:09.913603 systemd[1]: session-4.scope: Deactivated successfully. Nov 24 06:46:09.914353 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit. Nov 24 06:46:09.916698 systemd[1]: Started sshd@4-10.0.0.32:22-10.0.0.1:35954.service - OpenSSH per-connection server daemon (10.0.0.1:35954). Nov 24 06:46:09.917259 systemd-logind[1532]: Removed session 4. Nov 24 06:46:09.962971 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 35954 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:46:09.964502 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:09.968900 systemd-logind[1532]: New session 5 of user core. Nov 24 06:46:09.979559 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 24 06:46:10.037339 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 24 06:46:10.037677 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:46:10.053024 sudo[1706]: pam_unix(sudo:session): session closed for user root Nov 24 06:46:10.054909 sshd[1705]: Connection closed by 10.0.0.1 port 35954 Nov 24 06:46:10.055287 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:10.067109 systemd[1]: sshd@4-10.0.0.32:22-10.0.0.1:35954.service: Deactivated successfully. Nov 24 06:46:10.068890 systemd[1]: session-5.scope: Deactivated successfully. Nov 24 06:46:10.069624 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit. Nov 24 06:46:10.072427 systemd[1]: Started sshd@5-10.0.0.32:22-10.0.0.1:35966.service - OpenSSH per-connection server daemon (10.0.0.1:35966). Nov 24 06:46:10.073039 systemd-logind[1532]: Removed session 5. Nov 24 06:46:10.115222 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 35966 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:46:10.116497 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:10.120765 systemd-logind[1532]: New session 6 of user core. Nov 24 06:46:10.130608 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 24 06:46:10.184733 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 24 06:46:10.185024 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:46:10.191958 sudo[1717]: pam_unix(sudo:session): session closed for user root Nov 24 06:46:10.198094 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 24 06:46:10.198396 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:46:10.208160 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 24 06:46:10.257763 augenrules[1739]: No rules Nov 24 06:46:10.260013 systemd[1]: audit-rules.service: Deactivated successfully. Nov 24 06:46:10.260333 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 24 06:46:10.261635 sudo[1716]: pam_unix(sudo:session): session closed for user root Nov 24 06:46:10.263509 sshd[1715]: Connection closed by 10.0.0.1 port 35966 Nov 24 06:46:10.264003 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:10.273606 systemd[1]: sshd@5-10.0.0.32:22-10.0.0.1:35966.service: Deactivated successfully. Nov 24 06:46:10.275695 systemd[1]: session-6.scope: Deactivated successfully. Nov 24 06:46:10.276434 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit. Nov 24 06:46:10.279463 systemd[1]: Started sshd@6-10.0.0.32:22-10.0.0.1:35978.service - OpenSSH per-connection server daemon (10.0.0.1:35978). Nov 24 06:46:10.280322 systemd-logind[1532]: Removed session 6. Nov 24 06:46:10.336021 sshd[1748]: Accepted publickey for core from 10.0.0.1 port 35978 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:46:10.337623 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:46:10.342222 systemd-logind[1532]: New session 7 of user core. 
Nov 24 06:46:10.351597 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 24 06:46:10.407883 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 24 06:46:10.408667 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 24 06:46:10.720642 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 24 06:46:10.743752 (dockerd)[1774]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 24 06:46:10.979829 dockerd[1774]: time="2025-11-24T06:46:10.979691382Z" level=info msg="Starting up" Nov 24 06:46:10.980586 dockerd[1774]: time="2025-11-24T06:46:10.980564244Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 24 06:46:10.991848 dockerd[1774]: time="2025-11-24T06:46:10.991791014Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 24 06:46:11.158218 dockerd[1774]: time="2025-11-24T06:46:11.158146031Z" level=info msg="Loading containers: start." Nov 24 06:46:11.168460 kernel: Initializing XFRM netlink socket Nov 24 06:46:11.450710 systemd-networkd[1449]: docker0: Link UP Nov 24 06:46:11.456637 dockerd[1774]: time="2025-11-24T06:46:11.456589501Z" level=info msg="Loading containers: done." 
Nov 24 06:46:11.473788 dockerd[1774]: time="2025-11-24T06:46:11.473744398Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 24 06:46:11.473930 dockerd[1774]: time="2025-11-24T06:46:11.473831092Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 24 06:46:11.473930 dockerd[1774]: time="2025-11-24T06:46:11.473901283Z" level=info msg="Initializing buildkit" Nov 24 06:46:11.505926 dockerd[1774]: time="2025-11-24T06:46:11.505854978Z" level=info msg="Completed buildkit initialization" Nov 24 06:46:11.513566 dockerd[1774]: time="2025-11-24T06:46:11.513522315Z" level=info msg="Daemon has completed initialization" Nov 24 06:46:11.513709 dockerd[1774]: time="2025-11-24T06:46:11.513604470Z" level=info msg="API listen on /run/docker.sock" Nov 24 06:46:11.513840 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 24 06:46:12.053591 containerd[1544]: time="2025-11-24T06:46:12.053548978Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\"" Nov 24 06:46:12.720555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146003578.mount: Deactivated successfully. 
Nov 24 06:46:13.704357 containerd[1544]: time="2025-11-24T06:46:13.704293176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:13.705306 containerd[1544]: time="2025-11-24T06:46:13.705285132Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.2: active requests=0, bytes read=27063531" Nov 24 06:46:13.706377 containerd[1544]: time="2025-11-24T06:46:13.706349152Z" level=info msg="ImageCreate event name:\"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:13.709345 containerd[1544]: time="2025-11-24T06:46:13.709315911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:13.710210 containerd[1544]: time="2025-11-24T06:46:13.710163815Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.2\" with image id \"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077\", size \"27060130\" in 1.656579421s" Nov 24 06:46:13.710252 containerd[1544]: time="2025-11-24T06:46:13.710211735Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.2\" returns image reference \"sha256:a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85\"" Nov 24 06:46:13.710731 containerd[1544]: time="2025-11-24T06:46:13.710698330Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\"" Nov 24 06:46:14.769492 containerd[1544]: time="2025-11-24T06:46:14.769417781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:14.770348 containerd[1544]: time="2025-11-24T06:46:14.770283168Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.2: active requests=0, bytes read=21161621" Nov 24 06:46:14.771662 containerd[1544]: time="2025-11-24T06:46:14.771611837Z" level=info msg="ImageCreate event name:\"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:14.774210 containerd[1544]: time="2025-11-24T06:46:14.774175187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:14.775123 containerd[1544]: time="2025-11-24T06:46:14.775074498Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.2\" with image id \"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb\", size \"22818657\" in 1.064346311s" Nov 24 06:46:14.775123 containerd[1544]: time="2025-11-24T06:46:14.775106337Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.2\" returns image reference \"sha256:01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8\"" Nov 24 06:46:14.775569 containerd[1544]: time="2025-11-24T06:46:14.775525325Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\"" Nov 24 06:46:15.891914 containerd[1544]: time="2025-11-24T06:46:15.891840717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:15.892536 containerd[1544]: time="2025-11-24T06:46:15.892506378Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.2: active requests=0, bytes read=15725218" Nov 24 06:46:15.893653 containerd[1544]: time="2025-11-24T06:46:15.893620624Z" level=info msg="ImageCreate event name:\"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:15.896209 containerd[1544]: time="2025-11-24T06:46:15.896167192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:15.896964 containerd[1544]: time="2025-11-24T06:46:15.896918364Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.2\" with image id \"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6\", size \"17382272\" in 1.12136699s" Nov 24 06:46:15.897009 containerd[1544]: time="2025-11-24T06:46:15.896965593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.2\" returns image reference \"sha256:88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952\"" Nov 24 06:46:15.897320 containerd[1544]: time="2025-11-24T06:46:15.897304050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\"" Nov 24 06:46:16.468932 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 24 06:46:16.471770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:16.984693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 24 06:46:16.992717 (kubelet)[2071]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 24 06:46:17.032675 kubelet[2071]: E1124 06:46:17.032627 2071 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 24 06:46:17.039235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 24 06:46:17.039424 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 24 06:46:17.039797 systemd[1]: kubelet.service: Consumed 215ms CPU time, 110.5M memory peak. Nov 24 06:46:17.264767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2126657138.mount: Deactivated successfully. Nov 24 06:46:17.471025 containerd[1544]: time="2025-11-24T06:46:17.470966347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:17.471745 containerd[1544]: time="2025-11-24T06:46:17.471712369Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.2: active requests=0, bytes read=25964463" Nov 24 06:46:17.472946 containerd[1544]: time="2025-11-24T06:46:17.472915031Z" level=info msg="ImageCreate event name:\"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:17.474834 containerd[1544]: time="2025-11-24T06:46:17.474782522Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:17.475232 containerd[1544]: time="2025-11-24T06:46:17.475189798Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.2\" with image id \"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45\", repo tag \"registry.k8s.io/kube-proxy:v1.34.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5\", size \"25963482\" in 1.577865491s" Nov 24 06:46:17.475260 containerd[1544]: time="2025-11-24T06:46:17.475232268Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.2\" returns image reference \"sha256:8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45\"" Nov 24 06:46:17.475656 containerd[1544]: time="2025-11-24T06:46:17.475635176Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 24 06:46:18.029846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034902269.mount: Deactivated successfully. Nov 24 06:46:18.819602 containerd[1544]: time="2025-11-24T06:46:18.819554043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:18.820518 containerd[1544]: time="2025-11-24T06:46:18.820463443Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 24 06:46:18.837147 containerd[1544]: time="2025-11-24T06:46:18.837111988Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:18.841981 containerd[1544]: time="2025-11-24T06:46:18.841947551Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:18.843121 containerd[1544]: time="2025-11-24T06:46:18.843077746Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.367417964s" Nov 24 06:46:18.843121 containerd[1544]: time="2025-11-24T06:46:18.843117481Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 24 06:46:18.843565 containerd[1544]: time="2025-11-24T06:46:18.843510830Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 24 06:46:19.351863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3273567376.mount: Deactivated successfully. Nov 24 06:46:19.359550 containerd[1544]: time="2025-11-24T06:46:19.359499884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:19.360677 containerd[1544]: time="2025-11-24T06:46:19.360654805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 24 06:46:19.362085 containerd[1544]: time="2025-11-24T06:46:19.362052003Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:19.364269 containerd[1544]: time="2025-11-24T06:46:19.364234246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:19.364862 containerd[1544]: time="2025-11-24T06:46:19.364825217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag 
\"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 521.291123ms" Nov 24 06:46:19.364903 containerd[1544]: time="2025-11-24T06:46:19.364858911Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 24 06:46:19.365357 containerd[1544]: time="2025-11-24T06:46:19.365322593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 24 06:46:20.119874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890958847.mount: Deactivated successfully. Nov 24 06:46:22.372515 containerd[1544]: time="2025-11-24T06:46:22.372431711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:22.373504 containerd[1544]: time="2025-11-24T06:46:22.373468330Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166814" Nov 24 06:46:22.374683 containerd[1544]: time="2025-11-24T06:46:22.374622841Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:22.377368 containerd[1544]: time="2025-11-24T06:46:22.377324581Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:22.378415 containerd[1544]: time="2025-11-24T06:46:22.378365900Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size 
\"74311308\" in 3.013010335s" Nov 24 06:46:22.378415 containerd[1544]: time="2025-11-24T06:46:22.378410694Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 24 06:46:26.109164 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:26.109425 systemd[1]: kubelet.service: Consumed 215ms CPU time, 110.5M memory peak. Nov 24 06:46:26.111596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:26.134917 systemd[1]: Reload requested from client PID 2225 ('systemctl') (unit session-7.scope)... Nov 24 06:46:26.134931 systemd[1]: Reloading... Nov 24 06:46:26.217474 zram_generator::config[2271]: No configuration found. Nov 24 06:46:26.467600 systemd[1]: Reloading finished in 332 ms. Nov 24 06:46:26.521111 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 24 06:46:26.521209 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 24 06:46:26.521538 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:26.521579 systemd[1]: kubelet.service: Consumed 147ms CPU time, 98.1M memory peak. Nov 24 06:46:26.523069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:26.700390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:26.704561 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 06:46:26.739570 kubelet[2316]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 06:46:26.739570 kubelet[2316]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 06:46:26.739570 kubelet[2316]: I1124 06:46:26.739547 2316 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 06:46:27.096390 kubelet[2316]: I1124 06:46:27.096268 2316 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 24 06:46:27.096390 kubelet[2316]: I1124 06:46:27.096298 2316 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 06:46:27.098758 kubelet[2316]: I1124 06:46:27.098720 2316 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 24 06:46:27.098758 kubelet[2316]: I1124 06:46:27.098745 2316 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 24 06:46:27.099023 kubelet[2316]: I1124 06:46:27.098997 2316 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 06:46:27.334556 kubelet[2316]: E1124 06:46:27.334519 2316 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 24 06:46:27.339204 kubelet[2316]: I1124 06:46:27.339155 2316 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 06:46:27.342934 kubelet[2316]: I1124 06:46:27.342903 2316 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 06:46:27.348350 kubelet[2316]: I1124 06:46:27.348258 2316 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 24 06:46:27.348552 kubelet[2316]: I1124 06:46:27.348518 2316 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 06:46:27.348707 kubelet[2316]: I1124 06:46:27.348542 2316 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 06:46:27.348707 kubelet[2316]: I1124 06:46:27.348701 2316 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 06:46:27.348868 
kubelet[2316]: I1124 06:46:27.348711 2316 container_manager_linux.go:306] "Creating device plugin manager" Nov 24 06:46:27.348868 kubelet[2316]: I1124 06:46:27.348813 2316 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 24 06:46:27.352194 kubelet[2316]: I1124 06:46:27.352157 2316 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:46:27.352342 kubelet[2316]: I1124 06:46:27.352316 2316 kubelet.go:475] "Attempting to sync node with API server" Nov 24 06:46:27.352342 kubelet[2316]: I1124 06:46:27.352330 2316 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 06:46:27.352342 kubelet[2316]: I1124 06:46:27.352349 2316 kubelet.go:387] "Adding apiserver pod source" Nov 24 06:46:27.352948 kubelet[2316]: E1124 06:46:27.352906 2316 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 06:46:27.353696 kubelet[2316]: I1124 06:46:27.353662 2316 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 06:46:27.354269 kubelet[2316]: E1124 06:46:27.354223 2316 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 06:46:27.357473 kubelet[2316]: I1124 06:46:27.356888 2316 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 06:46:27.357606 kubelet[2316]: I1124 06:46:27.357569 2316 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 06:46:27.357657 kubelet[2316]: I1124 06:46:27.357623 2316 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 24 06:46:27.357741 kubelet[2316]: W1124 06:46:27.357718 2316 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 24 06:46:27.362149 kubelet[2316]: I1124 06:46:27.362132 2316 server.go:1262] "Started kubelet" Nov 24 06:46:27.362330 kubelet[2316]: I1124 06:46:27.362188 2316 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 06:46:27.362857 kubelet[2316]: I1124 06:46:27.362807 2316 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 06:46:27.362950 kubelet[2316]: I1124 06:46:27.362934 2316 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 24 06:46:27.367324 kubelet[2316]: I1124 06:46:27.367172 2316 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 06:46:27.367324 kubelet[2316]: I1124 06:46:27.367268 2316 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 06:46:27.369544 kubelet[2316]: I1124 06:46:27.368128 2316 server.go:310] "Adding debug handlers to kubelet server" Nov 24 06:46:27.369544 kubelet[2316]: I1124 06:46:27.368542 2316 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 06:46:27.372701 kubelet[2316]: E1124 06:46:27.372684 2316 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 06:46:27.372793 kubelet[2316]: I1124 06:46:27.372782 2316 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 24 06:46:27.373071 
kubelet[2316]: I1124 06:46:27.373043 2316 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 06:46:27.373127 kubelet[2316]: I1124 06:46:27.373103 2316 reconciler.go:29] "Reconciler: start to sync state" Nov 24 06:46:27.373567 kubelet[2316]: E1124 06:46:27.373540 2316 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 06:46:27.373740 kubelet[2316]: E1124 06:46:27.372396 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.32:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.32:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ade727c4e1093 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-24 06:46:27.362099347 +0000 UTC m=+0.654089253,LastTimestamp:2025-11-24 06:46:27.362099347 +0000 UTC m=+0.654089253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 24 06:46:27.374160 kubelet[2316]: E1124 06:46:27.374135 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="200ms" Nov 24 06:46:27.374969 kubelet[2316]: I1124 06:46:27.374946 2316 factory.go:223] Registration of the systemd container factory successfully Nov 24 06:46:27.375178 kubelet[2316]: 
I1124 06:46:27.375051 2316 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 06:46:27.375178 kubelet[2316]: E1124 06:46:27.375104 2316 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 06:46:27.376032 kubelet[2316]: I1124 06:46:27.376010 2316 factory.go:223] Registration of the containerd container factory successfully Nov 24 06:46:27.387742 kubelet[2316]: I1124 06:46:27.387703 2316 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 06:46:27.387742 kubelet[2316]: I1124 06:46:27.387730 2316 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 06:46:27.387742 kubelet[2316]: I1124 06:46:27.387746 2316 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:46:27.389495 kubelet[2316]: I1124 06:46:27.389458 2316 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 24 06:46:27.390814 kubelet[2316]: I1124 06:46:27.390788 2316 policy_none.go:49] "None policy: Start" Nov 24 06:46:27.390814 kubelet[2316]: I1124 06:46:27.390811 2316 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 24 06:46:27.390950 kubelet[2316]: I1124 06:46:27.390823 2316 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 24 06:46:27.391684 kubelet[2316]: I1124 06:46:27.391662 2316 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 24 06:46:27.391718 kubelet[2316]: I1124 06:46:27.391689 2316 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 24 06:46:27.391718 kubelet[2316]: I1124 06:46:27.391715 2316 kubelet.go:2427] "Starting kubelet main sync loop" Nov 24 06:46:27.391781 kubelet[2316]: E1124 06:46:27.391765 2316 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 06:46:27.392468 kubelet[2316]: E1124 06:46:27.392402 2316 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 06:46:27.393209 kubelet[2316]: I1124 06:46:27.393094 2316 policy_none.go:47] "Start" Nov 24 06:46:27.398255 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 24 06:46:27.417511 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 24 06:46:27.420795 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 24 06:46:27.440358 kubelet[2316]: E1124 06:46:27.440307 2316 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 06:46:27.440608 kubelet[2316]: I1124 06:46:27.440541 2316 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 06:46:27.440608 kubelet[2316]: I1124 06:46:27.440558 2316 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 06:46:27.440824 kubelet[2316]: I1124 06:46:27.440801 2316 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 06:46:27.441930 kubelet[2316]: E1124 06:46:27.441907 2316 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 06:46:27.441969 kubelet[2316]: E1124 06:46:27.441961 2316 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 24 06:46:27.502885 systemd[1]: Created slice kubepods-burstable-poda7b57e917ca4bf9e8cf873e0a0ba9cb5.slice - libcontainer container kubepods-burstable-poda7b57e917ca4bf9e8cf873e0a0ba9cb5.slice. Nov 24 06:46:27.523939 kubelet[2316]: E1124 06:46:27.523894 2316 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:27.527649 systemd[1]: Created slice kubepods-burstable-pod41694572f76b3db8403039f40dd5ea25.slice - libcontainer container kubepods-burstable-pod41694572f76b3db8403039f40dd5ea25.slice. 
Nov 24 06:46:27.529319 kubelet[2316]: E1124 06:46:27.529302 2316 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:27.530910 systemd[1]: Created slice kubepods-burstable-podf7d0af91d0c9a9742236c44baa5e2751.slice - libcontainer container kubepods-burstable-podf7d0af91d0c9a9742236c44baa5e2751.slice. Nov 24 06:46:27.532547 kubelet[2316]: E1124 06:46:27.532519 2316 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:27.542538 kubelet[2316]: I1124 06:46:27.542508 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 06:46:27.542948 kubelet[2316]: E1124 06:46:27.542913 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Nov 24 06:46:27.575519 kubelet[2316]: E1124 06:46:27.575487 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="400ms" Nov 24 06:46:27.674911 kubelet[2316]: I1124 06:46:27.674799 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7b57e917ca4bf9e8cf873e0a0ba9cb5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7b57e917ca4bf9e8cf873e0a0ba9cb5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:27.674911 kubelet[2316]: I1124 06:46:27.674835 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:27.675106 kubelet[2316]: I1124 06:46:27.674958 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:27.675106 kubelet[2316]: I1124 06:46:27.674979 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:27.675106 kubelet[2316]: I1124 06:46:27.674996 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:27.675106 kubelet[2316]: I1124 06:46:27.675018 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7b57e917ca4bf9e8cf873e0a0ba9cb5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7b57e917ca4bf9e8cf873e0a0ba9cb5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:27.675106 kubelet[2316]: I1124 06:46:27.675032 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a7b57e917ca4bf9e8cf873e0a0ba9cb5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a7b57e917ca4bf9e8cf873e0a0ba9cb5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:27.675228 kubelet[2316]: I1124 06:46:27.675048 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:27.675228 kubelet[2316]: I1124 06:46:27.675062 2316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7d0af91d0c9a9742236c44baa5e2751-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7d0af91d0c9a9742236c44baa5e2751\") " pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:27.744421 kubelet[2316]: I1124 06:46:27.744396 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 06:46:27.744802 kubelet[2316]: E1124 06:46:27.744661 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Nov 24 06:46:27.831810 containerd[1544]: time="2025-11-24T06:46:27.831764526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a7b57e917ca4bf9e8cf873e0a0ba9cb5,Namespace:kube-system,Attempt:0,}" Nov 24 06:46:27.833136 containerd[1544]: time="2025-11-24T06:46:27.833104996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:41694572f76b3db8403039f40dd5ea25,Namespace:kube-system,Attempt:0,}" Nov 24 06:46:27.836029 containerd[1544]: time="2025-11-24T06:46:27.835978319Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7d0af91d0c9a9742236c44baa5e2751,Namespace:kube-system,Attempt:0,}" Nov 24 06:46:27.976889 kubelet[2316]: E1124 06:46:27.976823 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="800ms" Nov 24 06:46:28.146448 kubelet[2316]: I1124 06:46:28.146395 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 06:46:28.146759 kubelet[2316]: E1124 06:46:28.146726 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Nov 24 06:46:28.372334 kubelet[2316]: E1124 06:46:28.372191 2316 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.32:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.32:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.187ade727c4e1093 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-24 06:46:27.362099347 +0000 UTC m=+0.654089253,LastTimestamp:2025-11-24 06:46:27.362099347 +0000 UTC m=+0.654089253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 24 06:46:28.397043 kubelet[2316]: E1124 06:46:28.397006 2316 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 24 06:46:28.532485 kubelet[2316]: E1124 06:46:28.532426 2316 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 24 06:46:28.545747 kubelet[2316]: E1124 06:46:28.545720 2316 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 24 06:46:28.602509 kubelet[2316]: E1124 06:46:28.602477 2316 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.32:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 24 06:46:28.653624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2601019410.mount: Deactivated successfully. 
Nov 24 06:46:28.777951 kubelet[2316]: E1124 06:46:28.777904 2316 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.32:6443: connect: connection refused" interval="1.6s" Nov 24 06:46:28.831597 containerd[1544]: time="2025-11-24T06:46:28.831543477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:46:28.835185 containerd[1544]: time="2025-11-24T06:46:28.835159697Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 24 06:46:28.836241 containerd[1544]: time="2025-11-24T06:46:28.836202918Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:46:28.838094 containerd[1544]: time="2025-11-24T06:46:28.838070320Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:46:28.839155 containerd[1544]: time="2025-11-24T06:46:28.839124272Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 24 06:46:28.840412 containerd[1544]: time="2025-11-24T06:46:28.840381736Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:46:28.841691 containerd[1544]: time="2025-11-24T06:46:28.841664097Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 24 06:46:28.843899 
containerd[1544]: time="2025-11-24T06:46:28.842639822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 24 06:46:28.843899 containerd[1544]: time="2025-11-24T06:46:28.843304221Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.00736173s" Nov 24 06:46:28.845686 containerd[1544]: time="2025-11-24T06:46:28.845650323Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.006839938s" Nov 24 06:46:28.846190 containerd[1544]: time="2025-11-24T06:46:28.846156946Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.012352855s" Nov 24 06:46:28.879582 containerd[1544]: time="2025-11-24T06:46:28.879499501Z" level=info msg="connecting to shim afa43ccd6e95d1d2a4e6fb1ddf89e7bbb0b865b8eb40d3de7b63e0d0c9f17bd8" address="unix:///run/containerd/s/3d587641db1d66646f6f36961d18f3f512f291953a8e2851f01d7318ac4e7aa8" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:28.882332 containerd[1544]: time="2025-11-24T06:46:28.882284979Z" level=info msg="connecting to shim 
f466fc11b9b7ded89949d0167dc58652fb97f7311d2b5861bc43afa21f8f02e4" address="unix:///run/containerd/s/3814f58f1354f4d55c9b18796885b8a43cd033ee2a71da27f3bbfb9234784b3a" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:28.887760 containerd[1544]: time="2025-11-24T06:46:28.887719438Z" level=info msg="connecting to shim adc8e6503882242ec5e526258c449d4a892f1203dee85c21e0110bde1ee6ebe8" address="unix:///run/containerd/s/04ebbfba68a93ae695660111cba7649071662a988a6a79d1850555ea916645c8" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:28.904578 systemd[1]: Started cri-containerd-afa43ccd6e95d1d2a4e6fb1ddf89e7bbb0b865b8eb40d3de7b63e0d0c9f17bd8.scope - libcontainer container afa43ccd6e95d1d2a4e6fb1ddf89e7bbb0b865b8eb40d3de7b63e0d0c9f17bd8. Nov 24 06:46:28.908809 systemd[1]: Started cri-containerd-f466fc11b9b7ded89949d0167dc58652fb97f7311d2b5861bc43afa21f8f02e4.scope - libcontainer container f466fc11b9b7ded89949d0167dc58652fb97f7311d2b5861bc43afa21f8f02e4. Nov 24 06:46:28.912539 systemd[1]: Started cri-containerd-adc8e6503882242ec5e526258c449d4a892f1203dee85c21e0110bde1ee6ebe8.scope - libcontainer container adc8e6503882242ec5e526258c449d4a892f1203dee85c21e0110bde1ee6ebe8. 
Nov 24 06:46:28.951498 kubelet[2316]: I1124 06:46:28.951470 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 06:46:28.952133 kubelet[2316]: E1124 06:46:28.952109 2316 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.32:6443/api/v1/nodes\": dial tcp 10.0.0.32:6443: connect: connection refused" node="localhost" Nov 24 06:46:28.961830 containerd[1544]: time="2025-11-24T06:46:28.961788324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7d0af91d0c9a9742236c44baa5e2751,Namespace:kube-system,Attempt:0,} returns sandbox id \"f466fc11b9b7ded89949d0167dc58652fb97f7311d2b5861bc43afa21f8f02e4\"" Nov 24 06:46:28.967763 containerd[1544]: time="2025-11-24T06:46:28.967721190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:41694572f76b3db8403039f40dd5ea25,Namespace:kube-system,Attempt:0,} returns sandbox id \"afa43ccd6e95d1d2a4e6fb1ddf89e7bbb0b865b8eb40d3de7b63e0d0c9f17bd8\"" Nov 24 06:46:28.968196 containerd[1544]: time="2025-11-24T06:46:28.968175324Z" level=info msg="CreateContainer within sandbox \"f466fc11b9b7ded89949d0167dc58652fb97f7311d2b5861bc43afa21f8f02e4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 24 06:46:28.968865 containerd[1544]: time="2025-11-24T06:46:28.968824454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a7b57e917ca4bf9e8cf873e0a0ba9cb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"adc8e6503882242ec5e526258c449d4a892f1203dee85c21e0110bde1ee6ebe8\"" Nov 24 06:46:28.973024 containerd[1544]: time="2025-11-24T06:46:28.972989316Z" level=info msg="CreateContainer within sandbox \"adc8e6503882242ec5e526258c449d4a892f1203dee85c21e0110bde1ee6ebe8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 24 06:46:28.982803 containerd[1544]: time="2025-11-24T06:46:28.982755780Z" level=info 
msg="CreateContainer within sandbox \"afa43ccd6e95d1d2a4e6fb1ddf89e7bbb0b865b8eb40d3de7b63e0d0c9f17bd8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 24 06:46:28.988099 containerd[1544]: time="2025-11-24T06:46:28.988074431Z" level=info msg="Container 2a3bc678d407e628672c838da88184abad2e5f2e66446b615232eae1c102f849: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:28.990667 containerd[1544]: time="2025-11-24T06:46:28.990646768Z" level=info msg="Container 06a05fcaa081594a9450f28b649f5c367797843b69a0491048c015dd3eab2bd5: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:29.001761 containerd[1544]: time="2025-11-24T06:46:29.001723987Z" level=info msg="CreateContainer within sandbox \"f466fc11b9b7ded89949d0167dc58652fb97f7311d2b5861bc43afa21f8f02e4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"06a05fcaa081594a9450f28b649f5c367797843b69a0491048c015dd3eab2bd5\"" Nov 24 06:46:29.002254 containerd[1544]: time="2025-11-24T06:46:29.002196796Z" level=info msg="StartContainer for \"06a05fcaa081594a9450f28b649f5c367797843b69a0491048c015dd3eab2bd5\"" Nov 24 06:46:29.003001 containerd[1544]: time="2025-11-24T06:46:29.002975260Z" level=info msg="CreateContainer within sandbox \"adc8e6503882242ec5e526258c449d4a892f1203dee85c21e0110bde1ee6ebe8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2a3bc678d407e628672c838da88184abad2e5f2e66446b615232eae1c102f849\"" Nov 24 06:46:29.003248 containerd[1544]: time="2025-11-24T06:46:29.003222375Z" level=info msg="connecting to shim 06a05fcaa081594a9450f28b649f5c367797843b69a0491048c015dd3eab2bd5" address="unix:///run/containerd/s/3814f58f1354f4d55c9b18796885b8a43cd033ee2a71da27f3bbfb9234784b3a" protocol=ttrpc version=3 Nov 24 06:46:29.003846 containerd[1544]: time="2025-11-24T06:46:29.003814268Z" level=info msg="StartContainer for \"2a3bc678d407e628672c838da88184abad2e5f2e66446b615232eae1c102f849\"" Nov 24 06:46:29.004590 
containerd[1544]: time="2025-11-24T06:46:29.004559840Z" level=info msg="Container 61ad248ec1e2592b8f1a8279053da4d5a91e9ba13e463cafa3420a9e178b6c9a: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:29.004817 containerd[1544]: time="2025-11-24T06:46:29.004798088Z" level=info msg="connecting to shim 2a3bc678d407e628672c838da88184abad2e5f2e66446b615232eae1c102f849" address="unix:///run/containerd/s/04ebbfba68a93ae695660111cba7649071662a988a6a79d1850555ea916645c8" protocol=ttrpc version=3 Nov 24 06:46:29.012978 containerd[1544]: time="2025-11-24T06:46:29.012940438Z" level=info msg="CreateContainer within sandbox \"afa43ccd6e95d1d2a4e6fb1ddf89e7bbb0b865b8eb40d3de7b63e0d0c9f17bd8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"61ad248ec1e2592b8f1a8279053da4d5a91e9ba13e463cafa3420a9e178b6c9a\"" Nov 24 06:46:29.013606 containerd[1544]: time="2025-11-24T06:46:29.013491374Z" level=info msg="StartContainer for \"61ad248ec1e2592b8f1a8279053da4d5a91e9ba13e463cafa3420a9e178b6c9a\"" Nov 24 06:46:29.014534 containerd[1544]: time="2025-11-24T06:46:29.014514468Z" level=info msg="connecting to shim 61ad248ec1e2592b8f1a8279053da4d5a91e9ba13e463cafa3420a9e178b6c9a" address="unix:///run/containerd/s/3d587641db1d66646f6f36961d18f3f512f291953a8e2851f01d7318ac4e7aa8" protocol=ttrpc version=3 Nov 24 06:46:29.023571 systemd[1]: Started cri-containerd-2a3bc678d407e628672c838da88184abad2e5f2e66446b615232eae1c102f849.scope - libcontainer container 2a3bc678d407e628672c838da88184abad2e5f2e66446b615232eae1c102f849. Nov 24 06:46:29.026643 systemd[1]: Started cri-containerd-06a05fcaa081594a9450f28b649f5c367797843b69a0491048c015dd3eab2bd5.scope - libcontainer container 06a05fcaa081594a9450f28b649f5c367797843b69a0491048c015dd3eab2bd5. Nov 24 06:46:29.036653 systemd[1]: Started cri-containerd-61ad248ec1e2592b8f1a8279053da4d5a91e9ba13e463cafa3420a9e178b6c9a.scope - libcontainer container 61ad248ec1e2592b8f1a8279053da4d5a91e9ba13e463cafa3420a9e178b6c9a. 
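The kubelet entries interleaved above use the klog header format, e.g. `E1124 06:46:28.952109 2316 kubelet_node_status.go:107] "…"`. A hedged sketch of splitting that header into severity, date, time, pid, source location, and message:

```python
import re

# klog header: <sev><MMDD> <HH:MM:SS.ffffff> <pid> <file>:<line>] <msg>
KLOG_RE = re.compile(
    r'^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+'
    r'(?P<pid>\d+)\s+'
    r'(?P<file>[\w.]+):(?P<line>\d+)\]\s*(?P<msg>.*)$'
)

def parse_klog(line):
    """Return the header fields as a dict, or None if the line is not klog."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

rec = parse_klog('E1124 06:46:28.952109    2316 kubelet_node_status.go:107] '
                 '"Unable to register node with API server" node="localhost"')
```

Filtering on `sev == "E"` and `file` makes it easy to isolate the registration failures from the noise.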
Nov 24 06:46:29.085888 containerd[1544]: time="2025-11-24T06:46:29.085835787Z" level=info msg="StartContainer for \"06a05fcaa081594a9450f28b649f5c367797843b69a0491048c015dd3eab2bd5\" returns successfully" Nov 24 06:46:29.088107 containerd[1544]: time="2025-11-24T06:46:29.088082522Z" level=info msg="StartContainer for \"2a3bc678d407e628672c838da88184abad2e5f2e66446b615232eae1c102f849\" returns successfully" Nov 24 06:46:29.104713 containerd[1544]: time="2025-11-24T06:46:29.104498289Z" level=info msg="StartContainer for \"61ad248ec1e2592b8f1a8279053da4d5a91e9ba13e463cafa3420a9e178b6c9a\" returns successfully" Nov 24 06:46:29.401363 kubelet[2316]: E1124 06:46:29.401330 2316 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:29.403399 kubelet[2316]: E1124 06:46:29.403372 2316 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:29.408026 kubelet[2316]: E1124 06:46:29.407996 2316 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:30.380814 kubelet[2316]: E1124 06:46:30.380776 2316 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 24 06:46:30.409417 kubelet[2316]: E1124 06:46:30.409389 2316 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:30.409543 kubelet[2316]: E1124 06:46:30.409493 2316 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:30.409646 kubelet[2316]: E1124 06:46:30.409621 2316 kubelet.go:3215] "No need to create a 
mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 24 06:46:30.493929 kubelet[2316]: E1124 06:46:30.493900 2316 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 24 06:46:30.553712 kubelet[2316]: I1124 06:46:30.553690 2316 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 06:46:30.560851 kubelet[2316]: I1124 06:46:30.560819 2316 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 24 06:46:30.560851 kubelet[2316]: E1124 06:46:30.560846 2316 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 24 06:46:30.566578 kubelet[2316]: E1124 06:46:30.566555 2316 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 06:46:30.667411 kubelet[2316]: E1124 06:46:30.667335 2316 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 06:46:30.774005 kubelet[2316]: I1124 06:46:30.773966 2316 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:30.777793 kubelet[2316]: E1124 06:46:30.777766 2316 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:30.777793 kubelet[2316]: I1124 06:46:30.777787 2316 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:30.779193 kubelet[2316]: E1124 06:46:30.779168 2316 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:30.779193 kubelet[2316]: I1124 06:46:30.779186 2316 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:30.780223 kubelet[2316]: E1124 06:46:30.780204 2316 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:31.355682 kubelet[2316]: I1124 06:46:31.355631 2316 apiserver.go:52] "Watching apiserver" Nov 24 06:46:31.373512 kubelet[2316]: I1124 06:46:31.373483 2316 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 06:46:32.050628 kubelet[2316]: I1124 06:46:32.050591 2316 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:32.146247 systemd[1]: Reload requested from client PID 2601 ('systemctl') (unit session-7.scope)... Nov 24 06:46:32.146262 systemd[1]: Reloading... Nov 24 06:46:32.216523 zram_generator::config[2647]: No configuration found. Nov 24 06:46:32.437428 systemd[1]: Reloading finished in 290 ms. Nov 24 06:46:32.468952 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:32.489567 systemd[1]: kubelet.service: Deactivated successfully. Nov 24 06:46:32.489852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 24 06:46:32.489898 systemd[1]: kubelet.service: Consumed 850ms CPU time, 126.5M memory peak. Nov 24 06:46:32.491524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 24 06:46:32.694260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
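The systemd entries above report timings in mixed forms ("Reloading finished in 290 ms", "Consumed 850ms CPU time"). A small helper to normalize such figures to milliseconds, assuming only ms/s/min units appear (an assumption; journald can emit other units):

```python
import re

_UNITS_MS = {"ms": 1.0, "s": 1000.0, "min": 60000.0}

def parse_duration_ms(text):
    """Parse strings like '290 ms', '850ms', or '1.5s' into milliseconds."""
    m = re.fullmatch(r"\s*(\d+(?:\.\d+)?)\s*(ms|s|min)\s*", text)
    if m is None:
        raise ValueError(f"unrecognized duration: {text!r}")
    return float(m.group(1)) * _UNITS_MS[m.group(2)]
```

Useful when comparing reload times across boots from logs in this format.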
Nov 24 06:46:32.701774 (kubelet)[2689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 24 06:46:32.738604 kubelet[2689]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 24 06:46:32.738604 kubelet[2689]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 24 06:46:32.738604 kubelet[2689]: I1124 06:46:32.737867 2689 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 24 06:46:32.744111 kubelet[2689]: I1124 06:46:32.744082 2689 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 24 06:46:32.744111 kubelet[2689]: I1124 06:46:32.744100 2689 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 24 06:46:32.744181 kubelet[2689]: I1124 06:46:32.744123 2689 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 24 06:46:32.744181 kubelet[2689]: I1124 06:46:32.744134 2689 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
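The watchdog_linux.go lines above log "Systemd watchdog is not enabled". A hedged sketch of the underlying check (per the sd_watchdog_enabled(3) convention: systemd exports `WATCHDOG_USEC`, and `WATCHDOG_PID` names the process the watchdog is meant for); the details here are an assumption based on that interface, not taken from the log:

```python
import os

def watchdog_interval_s():
    """Return the systemd watchdog interval in seconds, or None if disabled."""
    usec = os.environ.get("WATCHDOG_USEC")
    if not usec:
        return None  # watchdog not enabled for this service
    pid = os.environ.get("WATCHDOG_PID")
    if pid is not None and int(pid) != os.getpid():
        return None  # watchdog aimed at a different process
    return int(usec) / 1_000_000
```

With no `WatchdogSec=` in the unit, the variable is absent and the service logs exactly the "not enabled" message seen above.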
Nov 24 06:46:32.744322 kubelet[2689]: I1124 06:46:32.744300 2689 server.go:956] "Client rotation is on, will bootstrap in background" Nov 24 06:46:32.745336 kubelet[2689]: I1124 06:46:32.745319 2689 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 24 06:46:32.747274 kubelet[2689]: I1124 06:46:32.747251 2689 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 24 06:46:32.751957 kubelet[2689]: I1124 06:46:32.751929 2689 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 24 06:46:32.757990 kubelet[2689]: I1124 06:46:32.757960 2689 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 24 06:46:32.758196 kubelet[2689]: I1124 06:46:32.758162 2689 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 24 06:46:32.758333 kubelet[2689]: I1124 06:46:32.758185 2689 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 24 06:46:32.758333 kubelet[2689]: I1124 06:46:32.758329 2689 topology_manager.go:138] "Creating topology manager with none policy" Nov 24 06:46:32.758333 kubelet[2689]: I1124 06:46:32.758336 2689 container_manager_linux.go:306] "Creating device plugin manager" Nov 24 06:46:32.758481 kubelet[2689]: I1124 06:46:32.758356 2689 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 24 06:46:32.759043 kubelet[2689]: I1124 06:46:32.759013 2689 state_mem.go:36] 
"Initialized new in-memory state store" Nov 24 06:46:32.759192 kubelet[2689]: I1124 06:46:32.759169 2689 kubelet.go:475] "Attempting to sync node with API server" Nov 24 06:46:32.759192 kubelet[2689]: I1124 06:46:32.759183 2689 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 24 06:46:32.759244 kubelet[2689]: I1124 06:46:32.759225 2689 kubelet.go:387] "Adding apiserver pod source" Nov 24 06:46:32.759244 kubelet[2689]: I1124 06:46:32.759241 2689 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 24 06:46:32.761073 kubelet[2689]: I1124 06:46:32.760988 2689 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 24 06:46:32.761653 kubelet[2689]: I1124 06:46:32.761627 2689 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 24 06:46:32.761701 kubelet[2689]: I1124 06:46:32.761655 2689 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 24 06:46:32.765191 kubelet[2689]: I1124 06:46:32.764771 2689 server.go:1262] "Started kubelet" Nov 24 06:46:32.766639 kubelet[2689]: I1124 06:46:32.766094 2689 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 24 06:46:32.768778 kubelet[2689]: I1124 06:46:32.767460 2689 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 24 06:46:32.768778 kubelet[2689]: I1124 06:46:32.768387 2689 server.go:310] "Adding debug handlers to kubelet server" Nov 24 06:46:32.769875 kubelet[2689]: I1124 06:46:32.769859 2689 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 24 06:46:32.770162 kubelet[2689]: E1124 06:46:32.770139 2689 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 24 06:46:32.770795 kubelet[2689]: I1124 
06:46:32.770781 2689 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 24 06:46:32.771307 kubelet[2689]: I1124 06:46:32.771266 2689 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 24 06:46:32.771346 kubelet[2689]: I1124 06:46:32.771315 2689 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 24 06:46:32.771563 kubelet[2689]: I1124 06:46:32.771545 2689 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 24 06:46:32.772001 kubelet[2689]: I1124 06:46:32.771939 2689 reconciler.go:29] "Reconciler: start to sync state" Nov 24 06:46:32.772500 kubelet[2689]: I1124 06:46:32.772473 2689 factory.go:223] Registration of the systemd container factory successfully Nov 24 06:46:32.772702 kubelet[2689]: I1124 06:46:32.772669 2689 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 24 06:46:32.773140 kubelet[2689]: E1124 06:46:32.773118 2689 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 24 06:46:32.774459 kubelet[2689]: I1124 06:46:32.774384 2689 factory.go:223] Registration of the containerd container factory successfully Nov 24 06:46:32.780253 kubelet[2689]: I1124 06:46:32.780230 2689 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 24 06:46:32.783049 kubelet[2689]: I1124 06:46:32.783008 2689 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 24 06:46:32.785265 kubelet[2689]: I1124 06:46:32.785244 2689 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 24 06:46:32.785265 kubelet[2689]: I1124 06:46:32.785262 2689 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 24 06:46:32.785338 kubelet[2689]: I1124 06:46:32.785284 2689 kubelet.go:2427] "Starting kubelet main sync loop" Nov 24 06:46:32.786106 kubelet[2689]: E1124 06:46:32.785361 2689 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 24 06:46:32.807093 kubelet[2689]: I1124 06:46:32.807067 2689 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 24 06:46:32.807093 kubelet[2689]: I1124 06:46:32.807085 2689 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 24 06:46:32.807093 kubelet[2689]: I1124 06:46:32.807102 2689 state_mem.go:36] "Initialized new in-memory state store" Nov 24 06:46:32.807263 kubelet[2689]: I1124 06:46:32.807224 2689 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 24 06:46:32.807263 kubelet[2689]: I1124 06:46:32.807233 2689 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 24 06:46:32.807263 kubelet[2689]: I1124 06:46:32.807247 2689 policy_none.go:49] "None policy: Start" Nov 24 06:46:32.807263 kubelet[2689]: I1124 06:46:32.807256 2689 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 24 06:46:32.807263 kubelet[2689]: I1124 06:46:32.807265 2689 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 24 06:46:32.807376 kubelet[2689]: I1124 06:46:32.807342 2689 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 24 06:46:32.807376 kubelet[2689]: I1124 06:46:32.807349 2689 policy_none.go:47] "Start" Nov 24 06:46:32.811169 kubelet[2689]: E1124 06:46:32.811139 2689 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 24 06:46:32.811334 kubelet[2689]: I1124 06:46:32.811313 
2689 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 24 06:46:32.811359 kubelet[2689]: I1124 06:46:32.811326 2689 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 24 06:46:32.811858 kubelet[2689]: I1124 06:46:32.811817 2689 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 24 06:46:32.812742 kubelet[2689]: E1124 06:46:32.812726 2689 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 24 06:46:32.887065 kubelet[2689]: I1124 06:46:32.886887 2689 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:32.887065 kubelet[2689]: I1124 06:46:32.886949 2689 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:32.887065 kubelet[2689]: I1124 06:46:32.887021 2689 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:32.891820 kubelet[2689]: E1124 06:46:32.891779 2689 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:32.913006 kubelet[2689]: I1124 06:46:32.912985 2689 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 24 06:46:32.917524 kubelet[2689]: I1124 06:46:32.917498 2689 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 24 06:46:32.917640 kubelet[2689]: I1124 06:46:32.917567 2689 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 24 06:46:32.972778 kubelet[2689]: I1124 06:46:32.972745 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7b57e917ca4bf9e8cf873e0a0ba9cb5-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"a7b57e917ca4bf9e8cf873e0a0ba9cb5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:32.972885 kubelet[2689]: I1124 06:46:32.972781 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7b57e917ca4bf9e8cf873e0a0ba9cb5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a7b57e917ca4bf9e8cf873e0a0ba9cb5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:32.972885 kubelet[2689]: I1124 06:46:32.972800 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:32.972885 kubelet[2689]: I1124 06:46:32.972815 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:32.972885 kubelet[2689]: I1124 06:46:32.972830 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:32.972885 kubelet[2689]: I1124 06:46:32.972843 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a7b57e917ca4bf9e8cf873e0a0ba9cb5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7b57e917ca4bf9e8cf873e0a0ba9cb5\") " pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:32.973031 kubelet[2689]: I1124 06:46:32.972856 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:32.973031 kubelet[2689]: I1124 06:46:32.972870 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/41694572f76b3db8403039f40dd5ea25-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"41694572f76b3db8403039f40dd5ea25\") " pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:32.973031 kubelet[2689]: I1124 06:46:32.972884 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7d0af91d0c9a9742236c44baa5e2751-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7d0af91d0c9a9742236c44baa5e2751\") " pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:33.760510 kubelet[2689]: I1124 06:46:33.760469 2689 apiserver.go:52] "Watching apiserver" Nov 24 06:46:33.771526 kubelet[2689]: I1124 06:46:33.771493 2689 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 24 06:46:33.796133 kubelet[2689]: I1124 06:46:33.796118 2689 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:33.796333 kubelet[2689]: I1124 06:46:33.796303 2689 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 24 
06:46:33.796549 kubelet[2689]: I1124 06:46:33.796506 2689 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:33.853963 kubelet[2689]: E1124 06:46:33.853849 2689 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 24 06:46:33.855526 kubelet[2689]: E1124 06:46:33.855495 2689 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 24 06:46:33.855704 kubelet[2689]: E1124 06:46:33.855672 2689 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 24 06:46:33.873862 kubelet[2689]: I1124 06:46:33.873787 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.873767358 podStartE2EDuration="1.873767358s" podCreationTimestamp="2025-11-24 06:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:46:33.867660855 +0000 UTC m=+1.162312553" watchObservedRunningTime="2025-11-24 06:46:33.873767358 +0000 UTC m=+1.168419056" Nov 24 06:46:33.882875 kubelet[2689]: I1124 06:46:33.882767 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.882748736 podStartE2EDuration="1.882748736s" podCreationTimestamp="2025-11-24 06:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:46:33.874107257 +0000 UTC m=+1.168758955" watchObservedRunningTime="2025-11-24 06:46:33.882748736 +0000 UTC m=+1.177400434" Nov 24 06:46:33.894867 kubelet[2689]: I1124 06:46:33.894799 2689 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.894780029 podStartE2EDuration="1.894780029s" podCreationTimestamp="2025-11-24 06:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:46:33.884666192 +0000 UTC m=+1.179317890" watchObservedRunningTime="2025-11-24 06:46:33.894780029 +0000 UTC m=+1.189431717" Nov 24 06:46:38.645334 kubelet[2689]: I1124 06:46:38.645297 2689 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 24 06:46:38.645814 containerd[1544]: time="2025-11-24T06:46:38.645767542Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 24 06:46:38.646184 kubelet[2689]: I1124 06:46:38.646008 2689 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 24 06:46:39.427718 systemd[1]: Created slice kubepods-besteffort-pod8c382d6e_91fa_45fb_b565_0b5fcdcea3b3.slice - libcontainer container kubepods-besteffort-pod8c382d6e_91fa_45fb_b565_0b5fcdcea3b3.slice. 
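The "Created slice kubepods-besteffort-pod8c382d6e_91fa_45fb_b565_0b5fcdcea3b3.slice" entry above pairs with pod UID 8c382d6e-91fa-45fb-b565-0b5fcdcea3b3: with the systemd cgroup driver, dashes in the UID become underscores in the slice name. A small helper reproducing that mapping for best-effort pods (the prefix for other QoS classes is not shown in this log, so only the best-effort form is sketched):

```python
def besteffort_pod_slice(pod_uid):
    """Slice name the systemd cgroup driver derives for a BestEffort pod."""
    return "kubepods-besteffort-pod{}.slice".format(pod_uid.replace("-", "_"))
```

Handy for mapping a pod UID from `kubectl get pod -o yaml` back to the systemd slice seen in journal output like the above.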
Nov 24 06:46:39.519748 kubelet[2689]: I1124 06:46:39.519694 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8c382d6e-91fa-45fb-b565-0b5fcdcea3b3-kube-proxy\") pod \"kube-proxy-dwzhp\" (UID: \"8c382d6e-91fa-45fb-b565-0b5fcdcea3b3\") " pod="kube-system/kube-proxy-dwzhp" Nov 24 06:46:39.519748 kubelet[2689]: I1124 06:46:39.519738 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c382d6e-91fa-45fb-b565-0b5fcdcea3b3-xtables-lock\") pod \"kube-proxy-dwzhp\" (UID: \"8c382d6e-91fa-45fb-b565-0b5fcdcea3b3\") " pod="kube-system/kube-proxy-dwzhp" Nov 24 06:46:39.519748 kubelet[2689]: I1124 06:46:39.519751 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c382d6e-91fa-45fb-b565-0b5fcdcea3b3-lib-modules\") pod \"kube-proxy-dwzhp\" (UID: \"8c382d6e-91fa-45fb-b565-0b5fcdcea3b3\") " pod="kube-system/kube-proxy-dwzhp" Nov 24 06:46:39.519947 kubelet[2689]: I1124 06:46:39.519770 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh6sr\" (UniqueName: \"kubernetes.io/projected/8c382d6e-91fa-45fb-b565-0b5fcdcea3b3-kube-api-access-vh6sr\") pod \"kube-proxy-dwzhp\" (UID: \"8c382d6e-91fa-45fb-b565-0b5fcdcea3b3\") " pod="kube-system/kube-proxy-dwzhp" Nov 24 06:46:39.686051 kubelet[2689]: E1124 06:46:39.685819 2689 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 24 06:46:39.686051 kubelet[2689]: E1124 06:46:39.685852 2689 projected.go:196] Error preparing data for projected volume kube-api-access-vh6sr for pod kube-system/kube-proxy-dwzhp: configmap "kube-root-ca.crt" not found Nov 24 06:46:39.686051 kubelet[2689]: E1124 06:46:39.685911 2689 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8c382d6e-91fa-45fb-b565-0b5fcdcea3b3-kube-api-access-vh6sr podName:8c382d6e-91fa-45fb-b565-0b5fcdcea3b3 nodeName:}" failed. No retries permitted until 2025-11-24 06:46:40.185889789 +0000 UTC m=+7.480541477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vh6sr" (UniqueName: "kubernetes.io/projected/8c382d6e-91fa-45fb-b565-0b5fcdcea3b3-kube-api-access-vh6sr") pod "kube-proxy-dwzhp" (UID: "8c382d6e-91fa-45fb-b565-0b5fcdcea3b3") : configmap "kube-root-ca.crt" not found Nov 24 06:46:39.783401 systemd[1]: Created slice kubepods-besteffort-pod743174c2_adf1_4481_8867_756e2448b29a.slice - libcontainer container kubepods-besteffort-pod743174c2_adf1_4481_8867_756e2448b29a.slice. Nov 24 06:46:39.822079 kubelet[2689]: I1124 06:46:39.822041 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9h2\" (UniqueName: \"kubernetes.io/projected/743174c2-adf1-4481-8867-756e2448b29a-kube-api-access-rh9h2\") pod \"tigera-operator-65cdcdfd6d-tfwps\" (UID: \"743174c2-adf1-4481-8867-756e2448b29a\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-tfwps" Nov 24 06:46:39.822079 kubelet[2689]: I1124 06:46:39.822073 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/743174c2-adf1-4481-8867-756e2448b29a-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-tfwps\" (UID: \"743174c2-adf1-4481-8867-756e2448b29a\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-tfwps" Nov 24 06:46:40.089218 containerd[1544]: time="2025-11-24T06:46:40.089168752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-tfwps,Uid:743174c2-adf1-4481-8867-756e2448b29a,Namespace:tigera-operator,Attempt:0,}" Nov 24 06:46:40.104967 containerd[1544]: time="2025-11-24T06:46:40.104927453Z" level=info 
msg="connecting to shim 5ec503408d45177e7bb561b0f2409db01959de951b2608917d4dc01522f17d55" address="unix:///run/containerd/s/87615856d30403fa5149270f2f4764f0f01819b221444aea6b45fdc684d9adfa" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:40.135611 systemd[1]: Started cri-containerd-5ec503408d45177e7bb561b0f2409db01959de951b2608917d4dc01522f17d55.scope - libcontainer container 5ec503408d45177e7bb561b0f2409db01959de951b2608917d4dc01522f17d55. Nov 24 06:46:40.182966 containerd[1544]: time="2025-11-24T06:46:40.182925131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-tfwps,Uid:743174c2-adf1-4481-8867-756e2448b29a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5ec503408d45177e7bb561b0f2409db01959de951b2608917d4dc01522f17d55\"" Nov 24 06:46:40.184403 containerd[1544]: time="2025-11-24T06:46:40.184382692Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 24 06:46:40.341545 containerd[1544]: time="2025-11-24T06:46:40.341419214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dwzhp,Uid:8c382d6e-91fa-45fb-b565-0b5fcdcea3b3,Namespace:kube-system,Attempt:0,}" Nov 24 06:46:40.364168 containerd[1544]: time="2025-11-24T06:46:40.364126645Z" level=info msg="connecting to shim e2ed5be8916ddbfd69ca50c3b60d7e92a08810aa5b2dee88cacd6cf90d4c2a54" address="unix:///run/containerd/s/26b32c590bb258c7fdfb2b704830bb24857aaf9d8d4ec3127f56ffff18401730" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:40.393658 systemd[1]: Started cri-containerd-e2ed5be8916ddbfd69ca50c3b60d7e92a08810aa5b2dee88cacd6cf90d4c2a54.scope - libcontainer container e2ed5be8916ddbfd69ca50c3b60d7e92a08810aa5b2dee88cacd6cf90d4c2a54. 
Nov 24 06:46:40.419464 containerd[1544]: time="2025-11-24T06:46:40.419408455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dwzhp,Uid:8c382d6e-91fa-45fb-b565-0b5fcdcea3b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2ed5be8916ddbfd69ca50c3b60d7e92a08810aa5b2dee88cacd6cf90d4c2a54\"" Nov 24 06:46:40.425720 containerd[1544]: time="2025-11-24T06:46:40.425674952Z" level=info msg="CreateContainer within sandbox \"e2ed5be8916ddbfd69ca50c3b60d7e92a08810aa5b2dee88cacd6cf90d4c2a54\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 24 06:46:40.436963 containerd[1544]: time="2025-11-24T06:46:40.436912085Z" level=info msg="Container 7d8c3daf0250fafd6e1be5f91e0bd5e2efaba1c979609dc893238a285e7a5679: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:40.446097 containerd[1544]: time="2025-11-24T06:46:40.446040515Z" level=info msg="CreateContainer within sandbox \"e2ed5be8916ddbfd69ca50c3b60d7e92a08810aa5b2dee88cacd6cf90d4c2a54\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d8c3daf0250fafd6e1be5f91e0bd5e2efaba1c979609dc893238a285e7a5679\"" Nov 24 06:46:40.446764 containerd[1544]: time="2025-11-24T06:46:40.446726363Z" level=info msg="StartContainer for \"7d8c3daf0250fafd6e1be5f91e0bd5e2efaba1c979609dc893238a285e7a5679\"" Nov 24 06:46:40.448747 containerd[1544]: time="2025-11-24T06:46:40.448704679Z" level=info msg="connecting to shim 7d8c3daf0250fafd6e1be5f91e0bd5e2efaba1c979609dc893238a285e7a5679" address="unix:///run/containerd/s/26b32c590bb258c7fdfb2b704830bb24857aaf9d8d4ec3127f56ffff18401730" protocol=ttrpc version=3 Nov 24 06:46:40.471566 systemd[1]: Started cri-containerd-7d8c3daf0250fafd6e1be5f91e0bd5e2efaba1c979609dc893238a285e7a5679.scope - libcontainer container 7d8c3daf0250fafd6e1be5f91e0bd5e2efaba1c979609dc893238a285e7a5679. 
Nov 24 06:46:40.574829 containerd[1544]: time="2025-11-24T06:46:40.574792863Z" level=info msg="StartContainer for \"7d8c3daf0250fafd6e1be5f91e0bd5e2efaba1c979609dc893238a285e7a5679\" returns successfully" Nov 24 06:46:40.823643 kubelet[2689]: I1124 06:46:40.823581 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dwzhp" podStartSLOduration=1.8235631140000002 podStartE2EDuration="1.823563114s" podCreationTimestamp="2025-11-24 06:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:46:40.823470087 +0000 UTC m=+8.118121785" watchObservedRunningTime="2025-11-24 06:46:40.823563114 +0000 UTC m=+8.118214802" Nov 24 06:46:41.279902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1802848604.mount: Deactivated successfully. Nov 24 06:46:41.594289 containerd[1544]: time="2025-11-24T06:46:41.594183280Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:41.595182 containerd[1544]: time="2025-11-24T06:46:41.595150573Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 24 06:46:41.596566 containerd[1544]: time="2025-11-24T06:46:41.596530885Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:41.598580 containerd[1544]: time="2025-11-24T06:46:41.598549343Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:41.599130 containerd[1544]: time="2025-11-24T06:46:41.599102278Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id 
\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.414694578s" Nov 24 06:46:41.599168 containerd[1544]: time="2025-11-24T06:46:41.599131013Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 24 06:46:41.603457 containerd[1544]: time="2025-11-24T06:46:41.603394500Z" level=info msg="CreateContainer within sandbox \"5ec503408d45177e7bb561b0f2409db01959de951b2608917d4dc01522f17d55\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 24 06:46:41.609799 containerd[1544]: time="2025-11-24T06:46:41.609776858Z" level=info msg="Container f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:41.618156 containerd[1544]: time="2025-11-24T06:46:41.618112971Z" level=info msg="CreateContainer within sandbox \"5ec503408d45177e7bb561b0f2409db01959de951b2608917d4dc01522f17d55\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02\"" Nov 24 06:46:41.618548 containerd[1544]: time="2025-11-24T06:46:41.618514276Z" level=info msg="StartContainer for \"f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02\"" Nov 24 06:46:41.619228 containerd[1544]: time="2025-11-24T06:46:41.619204973Z" level=info msg="connecting to shim f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02" address="unix:///run/containerd/s/87615856d30403fa5149270f2f4764f0f01819b221444aea6b45fdc684d9adfa" protocol=ttrpc version=3 Nov 24 06:46:41.659578 systemd[1]: Started cri-containerd-f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02.scope - libcontainer container 
f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02. Nov 24 06:46:41.687784 containerd[1544]: time="2025-11-24T06:46:41.687744102Z" level=info msg="StartContainer for \"f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02\" returns successfully" Nov 24 06:46:41.838965 kubelet[2689]: I1124 06:46:41.838903 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-tfwps" podStartSLOduration=1.423189663 podStartE2EDuration="2.838884018s" podCreationTimestamp="2025-11-24 06:46:39 +0000 UTC" firstStartedPulling="2025-11-24 06:46:40.18415668 +0000 UTC m=+7.478808378" lastFinishedPulling="2025-11-24 06:46:41.599851035 +0000 UTC m=+8.894502733" observedRunningTime="2025-11-24 06:46:41.831596014 +0000 UTC m=+9.126247712" watchObservedRunningTime="2025-11-24 06:46:41.838884018 +0000 UTC m=+9.133535716" Nov 24 06:46:43.630678 systemd[1]: cri-containerd-f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02.scope: Deactivated successfully. Nov 24 06:46:43.636860 containerd[1544]: time="2025-11-24T06:46:43.635105470Z" level=info msg="received container exit event container_id:\"f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02\" id:\"f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02\" pid:3030 exit_status:1 exited_at:{seconds:1763966803 nanos:633156571}" Nov 24 06:46:43.670585 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02-rootfs.mount: Deactivated successfully. 
Nov 24 06:46:44.826881 kubelet[2689]: I1124 06:46:44.826848 2689 scope.go:117] "RemoveContainer" containerID="f89f2c9698b6e9104cda64f75d975d1463e70edc3a4910ef3dd5a48ed5e6fe02" Nov 24 06:46:44.831083 containerd[1544]: time="2025-11-24T06:46:44.831050616Z" level=info msg="CreateContainer within sandbox \"5ec503408d45177e7bb561b0f2409db01959de951b2608917d4dc01522f17d55\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 24 06:46:44.841407 containerd[1544]: time="2025-11-24T06:46:44.840865433Z" level=info msg="Container e7eda0dac6430bbdf8ed98405d95bc2fbccefbd9c52eb2facf65fae56823b59d: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:44.848223 containerd[1544]: time="2025-11-24T06:46:44.848190184Z" level=info msg="CreateContainer within sandbox \"5ec503408d45177e7bb561b0f2409db01959de951b2608917d4dc01522f17d55\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e7eda0dac6430bbdf8ed98405d95bc2fbccefbd9c52eb2facf65fae56823b59d\"" Nov 24 06:46:44.848720 containerd[1544]: time="2025-11-24T06:46:44.848692630Z" level=info msg="StartContainer for \"e7eda0dac6430bbdf8ed98405d95bc2fbccefbd9c52eb2facf65fae56823b59d\"" Nov 24 06:46:44.849516 containerd[1544]: time="2025-11-24T06:46:44.849493212Z" level=info msg="connecting to shim e7eda0dac6430bbdf8ed98405d95bc2fbccefbd9c52eb2facf65fae56823b59d" address="unix:///run/containerd/s/87615856d30403fa5149270f2f4764f0f01819b221444aea6b45fdc684d9adfa" protocol=ttrpc version=3 Nov 24 06:46:44.873574 systemd[1]: Started cri-containerd-e7eda0dac6430bbdf8ed98405d95bc2fbccefbd9c52eb2facf65fae56823b59d.scope - libcontainer container e7eda0dac6430bbdf8ed98405d95bc2fbccefbd9c52eb2facf65fae56823b59d. 
Nov 24 06:46:45.068822 containerd[1544]: time="2025-11-24T06:46:45.068775942Z" level=info msg="StartContainer for \"e7eda0dac6430bbdf8ed98405d95bc2fbccefbd9c52eb2facf65fae56823b59d\" returns successfully" Nov 24 06:46:46.680243 sudo[1752]: pam_unix(sudo:session): session closed for user root Nov 24 06:46:46.681720 sshd[1751]: Connection closed by 10.0.0.1 port 35978 Nov 24 06:46:46.682177 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Nov 24 06:46:46.686011 systemd[1]: sshd@6-10.0.0.32:22-10.0.0.1:35978.service: Deactivated successfully. Nov 24 06:46:46.688571 systemd[1]: session-7.scope: Deactivated successfully. Nov 24 06:46:46.688810 systemd[1]: session-7.scope: Consumed 5.905s CPU time, 231M memory peak. Nov 24 06:46:46.690738 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. Nov 24 06:46:46.692338 systemd-logind[1532]: Removed session 7. Nov 24 06:46:48.820326 update_engine[1533]: I20251124 06:46:48.819474 1533 update_attempter.cc:509] Updating boot flags... Nov 24 06:46:52.316339 systemd[1]: Created slice kubepods-besteffort-podd25e8790_05ca_49d7_8e1e_105a063febcd.slice - libcontainer container kubepods-besteffort-podd25e8790_05ca_49d7_8e1e_105a063febcd.slice. 
Nov 24 06:46:52.402651 kubelet[2689]: I1124 06:46:52.402607 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d25e8790-05ca-49d7-8e1e-105a063febcd-tigera-ca-bundle\") pod \"calico-typha-cf99b78b5-fmwjt\" (UID: \"d25e8790-05ca-49d7-8e1e-105a063febcd\") " pod="calico-system/calico-typha-cf99b78b5-fmwjt" Nov 24 06:46:52.402651 kubelet[2689]: I1124 06:46:52.402651 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d25e8790-05ca-49d7-8e1e-105a063febcd-typha-certs\") pod \"calico-typha-cf99b78b5-fmwjt\" (UID: \"d25e8790-05ca-49d7-8e1e-105a063febcd\") " pod="calico-system/calico-typha-cf99b78b5-fmwjt" Nov 24 06:46:52.403105 kubelet[2689]: I1124 06:46:52.402669 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zqvc\" (UniqueName: \"kubernetes.io/projected/d25e8790-05ca-49d7-8e1e-105a063febcd-kube-api-access-5zqvc\") pod \"calico-typha-cf99b78b5-fmwjt\" (UID: \"d25e8790-05ca-49d7-8e1e-105a063febcd\") " pod="calico-system/calico-typha-cf99b78b5-fmwjt" Nov 24 06:46:52.503663 systemd[1]: Created slice kubepods-besteffort-pod9d70f6e5_e5cf_4cee_9c71_26b9c38e0ce6.slice - libcontainer container kubepods-besteffort-pod9d70f6e5_e5cf_4cee_9c71_26b9c38e0ce6.slice. 
Nov 24 06:46:52.603352 kubelet[2689]: I1124 06:46:52.603245 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-cni-bin-dir\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603352 kubelet[2689]: I1124 06:46:52.603285 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-cni-net-dir\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603352 kubelet[2689]: I1124 06:46:52.603298 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-var-lib-calico\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603352 kubelet[2689]: I1124 06:46:52.603311 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-xtables-lock\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603352 kubelet[2689]: I1124 06:46:52.603327 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-tigera-ca-bundle\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603558 kubelet[2689]: I1124 06:46:52.603340 2689 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-cni-log-dir\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603558 kubelet[2689]: I1124 06:46:52.603398 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-lib-modules\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603558 kubelet[2689]: I1124 06:46:52.603463 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgfdc\" (UniqueName: \"kubernetes.io/projected/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-kube-api-access-rgfdc\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603558 kubelet[2689]: I1124 06:46:52.603498 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-flexvol-driver-host\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603558 kubelet[2689]: I1124 06:46:52.603518 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-node-certs\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603700 kubelet[2689]: I1124 06:46:52.603535 2689 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-policysync\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.603700 kubelet[2689]: I1124 06:46:52.603548 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6-var-run-calico\") pod \"calico-node-d585m\" (UID: \"9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6\") " pod="calico-system/calico-node-d585m" Nov 24 06:46:52.623861 containerd[1544]: time="2025-11-24T06:46:52.623820559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf99b78b5-fmwjt,Uid:d25e8790-05ca-49d7-8e1e-105a063febcd,Namespace:calico-system,Attempt:0,}" Nov 24 06:46:52.661904 containerd[1544]: time="2025-11-24T06:46:52.661843483Z" level=info msg="connecting to shim 09c992acb3484fc6003273a4d091ab855890e9e6929c615e53b2647dcdd32624" address="unix:///run/containerd/s/399697844f555a9caa7584bbfa5540a7f60157cffbaca85b57f3fbb5d9e24c49" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:52.685611 systemd[1]: Started cri-containerd-09c992acb3484fc6003273a4d091ab855890e9e6929c615e53b2647dcdd32624.scope - libcontainer container 09c992acb3484fc6003273a4d091ab855890e9e6929c615e53b2647dcdd32624. 
Nov 24 06:46:52.706754 kubelet[2689]: E1124 06:46:52.706713 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.707021 kubelet[2689]: W1124 06:46:52.706981 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.707021 kubelet[2689]: E1124 06:46:52.707002 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.710459 kubelet[2689]: E1124 06:46:52.710380 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.710459 kubelet[2689]: W1124 06:46:52.710392 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.710459 kubelet[2689]: E1124 06:46:52.710403 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.711971 kubelet[2689]: E1124 06:46:52.711919 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z57g" podUID="de4b2b6f-2e35-454b-b826-35c899986b61" Nov 24 06:46:52.714031 kubelet[2689]: E1124 06:46:52.713991 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.714236 kubelet[2689]: W1124 06:46:52.714153 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.714236 kubelet[2689]: E1124 06:46:52.714185 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.739989 containerd[1544]: time="2025-11-24T06:46:52.739905912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf99b78b5-fmwjt,Uid:d25e8790-05ca-49d7-8e1e-105a063febcd,Namespace:calico-system,Attempt:0,} returns sandbox id \"09c992acb3484fc6003273a4d091ab855890e9e6929c615e53b2647dcdd32624\"" Nov 24 06:46:52.742420 containerd[1544]: time="2025-11-24T06:46:52.741759862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 24 06:46:52.786735 kubelet[2689]: E1124 06:46:52.786702 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.786735 kubelet[2689]: W1124 06:46:52.786725 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.786925 kubelet[2689]: E1124 06:46:52.786850 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.787428 kubelet[2689]: E1124 06:46:52.787264 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.787428 kubelet[2689]: W1124 06:46:52.787279 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.787428 kubelet[2689]: E1124 06:46:52.787288 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.788040 kubelet[2689]: E1124 06:46:52.787599 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.788040 kubelet[2689]: W1124 06:46:52.787642 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.788040 kubelet[2689]: E1124 06:46:52.787651 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.788040 kubelet[2689]: E1124 06:46:52.787946 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.788040 kubelet[2689]: W1124 06:46:52.787980 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.788040 kubelet[2689]: E1124 06:46:52.787990 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.788294 kubelet[2689]: E1124 06:46:52.788245 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.788294 kubelet[2689]: W1124 06:46:52.788265 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.788294 kubelet[2689]: E1124 06:46:52.788273 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.788502 kubelet[2689]: E1124 06:46:52.788487 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.788502 kubelet[2689]: W1124 06:46:52.788498 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.788660 kubelet[2689]: E1124 06:46:52.788506 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.788746 kubelet[2689]: E1124 06:46:52.788733 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.788746 kubelet[2689]: W1124 06:46:52.788742 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.788827 kubelet[2689]: E1124 06:46:52.788751 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.789056 kubelet[2689]: E1124 06:46:52.789011 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.789056 kubelet[2689]: W1124 06:46:52.789035 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.789056 kubelet[2689]: E1124 06:46:52.789059 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.789354 kubelet[2689]: E1124 06:46:52.789339 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.789421 kubelet[2689]: W1124 06:46:52.789350 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.789421 kubelet[2689]: E1124 06:46:52.789375 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.789612 kubelet[2689]: E1124 06:46:52.789597 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.789612 kubelet[2689]: W1124 06:46:52.789605 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.789612 kubelet[2689]: E1124 06:46:52.789613 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.789812 kubelet[2689]: E1124 06:46:52.789803 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.789812 kubelet[2689]: W1124 06:46:52.789810 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.789859 kubelet[2689]: E1124 06:46:52.789843 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.790013 kubelet[2689]: E1124 06:46:52.789984 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.790013 kubelet[2689]: W1124 06:46:52.789999 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.790013 kubelet[2689]: E1124 06:46:52.790007 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.790231 kubelet[2689]: E1124 06:46:52.790211 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.790231 kubelet[2689]: W1124 06:46:52.790224 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.790322 kubelet[2689]: E1124 06:46:52.790237 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.790521 kubelet[2689]: E1124 06:46:52.790399 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.790521 kubelet[2689]: W1124 06:46:52.790407 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.790521 kubelet[2689]: E1124 06:46:52.790415 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.790808 kubelet[2689]: E1124 06:46:52.790594 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.790808 kubelet[2689]: W1124 06:46:52.790601 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.790808 kubelet[2689]: E1124 06:46:52.790608 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.790808 kubelet[2689]: E1124 06:46:52.790773 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.790808 kubelet[2689]: W1124 06:46:52.790779 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.790808 kubelet[2689]: E1124 06:46:52.790786 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.791328 kubelet[2689]: E1124 06:46:52.790920 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.791328 kubelet[2689]: W1124 06:46:52.790927 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.791328 kubelet[2689]: E1124 06:46:52.790933 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.791328 kubelet[2689]: E1124 06:46:52.791117 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.791328 kubelet[2689]: W1124 06:46:52.791124 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.791328 kubelet[2689]: E1124 06:46:52.791131 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.791485 kubelet[2689]: E1124 06:46:52.791367 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.791485 kubelet[2689]: W1124 06:46:52.791375 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.791485 kubelet[2689]: E1124 06:46:52.791383 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.792390 kubelet[2689]: E1124 06:46:52.791586 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.792390 kubelet[2689]: W1124 06:46:52.791597 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.792390 kubelet[2689]: E1124 06:46:52.791605 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.805179 kubelet[2689]: E1124 06:46:52.805148 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.805179 kubelet[2689]: W1124 06:46:52.805165 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.805179 kubelet[2689]: E1124 06:46:52.805182 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.805386 kubelet[2689]: I1124 06:46:52.805205 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/de4b2b6f-2e35-454b-b826-35c899986b61-socket-dir\") pod \"csi-node-driver-8z57g\" (UID: \"de4b2b6f-2e35-454b-b826-35c899986b61\") " pod="calico-system/csi-node-driver-8z57g" Nov 24 06:46:52.805573 kubelet[2689]: E1124 06:46:52.805552 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.805573 kubelet[2689]: W1124 06:46:52.805564 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.805573 kubelet[2689]: E1124 06:46:52.805573 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.805837 kubelet[2689]: I1124 06:46:52.805590 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/de4b2b6f-2e35-454b-b826-35c899986b61-kubelet-dir\") pod \"csi-node-driver-8z57g\" (UID: \"de4b2b6f-2e35-454b-b826-35c899986b61\") " pod="calico-system/csi-node-driver-8z57g" Nov 24 06:46:52.805926 kubelet[2689]: E1124 06:46:52.805896 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.806191 kubelet[2689]: W1124 06:46:52.805922 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.806191 kubelet[2689]: E1124 06:46:52.805981 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.806609 kubelet[2689]: E1124 06:46:52.806527 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.806609 kubelet[2689]: W1124 06:46:52.806576 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.806609 kubelet[2689]: E1124 06:46:52.806592 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.807073 kubelet[2689]: E1124 06:46:52.807059 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.807127 kubelet[2689]: W1124 06:46:52.807069 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.807127 kubelet[2689]: E1124 06:46:52.807090 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.807186 kubelet[2689]: I1124 06:46:52.807153 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/de4b2b6f-2e35-454b-b826-35c899986b61-registration-dir\") pod \"csi-node-driver-8z57g\" (UID: \"de4b2b6f-2e35-454b-b826-35c899986b61\") " pod="calico-system/csi-node-driver-8z57g" Nov 24 06:46:52.807407 kubelet[2689]: E1124 06:46:52.807394 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.807407 kubelet[2689]: W1124 06:46:52.807404 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.807500 kubelet[2689]: E1124 06:46:52.807413 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.807612 kubelet[2689]: I1124 06:46:52.807535 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzmrw\" (UniqueName: \"kubernetes.io/projected/de4b2b6f-2e35-454b-b826-35c899986b61-kube-api-access-pzmrw\") pod \"csi-node-driver-8z57g\" (UID: \"de4b2b6f-2e35-454b-b826-35c899986b61\") " pod="calico-system/csi-node-driver-8z57g" Nov 24 06:46:52.807661 kubelet[2689]: E1124 06:46:52.807653 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.807661 kubelet[2689]: W1124 06:46:52.807660 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.807729 kubelet[2689]: E1124 06:46:52.807668 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.807892 kubelet[2689]: E1124 06:46:52.807879 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.807892 kubelet[2689]: W1124 06:46:52.807888 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.807988 kubelet[2689]: E1124 06:46:52.807896 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.808097 kubelet[2689]: E1124 06:46:52.808085 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.808140 kubelet[2689]: W1124 06:46:52.808093 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.808140 kubelet[2689]: E1124 06:46:52.808124 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.808209 kubelet[2689]: I1124 06:46:52.808141 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/de4b2b6f-2e35-454b-b826-35c899986b61-varrun\") pod \"csi-node-driver-8z57g\" (UID: \"de4b2b6f-2e35-454b-b826-35c899986b61\") " pod="calico-system/csi-node-driver-8z57g" Nov 24 06:46:52.808385 kubelet[2689]: E1124 06:46:52.808370 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.808385 kubelet[2689]: W1124 06:46:52.808381 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.808489 kubelet[2689]: E1124 06:46:52.808389 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.808588 kubelet[2689]: E1124 06:46:52.808576 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.808715 kubelet[2689]: W1124 06:46:52.808609 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.808715 kubelet[2689]: E1124 06:46:52.808619 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.808895 kubelet[2689]: E1124 06:46:52.808864 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.808895 kubelet[2689]: W1124 06:46:52.808875 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.808895 kubelet[2689]: E1124 06:46:52.808883 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.809131 kubelet[2689]: E1124 06:46:52.809044 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.809131 kubelet[2689]: W1124 06:46:52.809051 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.809131 kubelet[2689]: E1124 06:46:52.809058 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.809377 kubelet[2689]: E1124 06:46:52.809349 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.809377 kubelet[2689]: W1124 06:46:52.809365 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.809377 kubelet[2689]: E1124 06:46:52.809379 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.809958 kubelet[2689]: E1124 06:46:52.809607 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.809958 kubelet[2689]: W1124 06:46:52.809616 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.809958 kubelet[2689]: E1124 06:46:52.809626 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.812776 containerd[1544]: time="2025-11-24T06:46:52.812749219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d585m,Uid:9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6,Namespace:calico-system,Attempt:0,}" Nov 24 06:46:52.835816 containerd[1544]: time="2025-11-24T06:46:52.835770463Z" level=info msg="connecting to shim 2898da3c7082606675e8d94f0b5f5cfd98ba5e15a5be4e04f8864ee0a60a9b0b" address="unix:///run/containerd/s/a868de683298aca8da0efa385585e7f28a8a19cd0604fcd0a7bd5ba81248f964" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:46:52.865623 systemd[1]: Started cri-containerd-2898da3c7082606675e8d94f0b5f5cfd98ba5e15a5be4e04f8864ee0a60a9b0b.scope - libcontainer container 2898da3c7082606675e8d94f0b5f5cfd98ba5e15a5be4e04f8864ee0a60a9b0b. 
Nov 24 06:46:52.909319 kubelet[2689]: E1124 06:46:52.909294 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.909319 kubelet[2689]: W1124 06:46:52.909310 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.909319 kubelet[2689]: E1124 06:46:52.909326 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.909542 kubelet[2689]: E1124 06:46:52.909529 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.909542 kubelet[2689]: W1124 06:46:52.909541 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.909600 kubelet[2689]: E1124 06:46:52.909553 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.909747 kubelet[2689]: E1124 06:46:52.909736 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.909747 kubelet[2689]: W1124 06:46:52.909744 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.909793 kubelet[2689]: E1124 06:46:52.909751 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.909944 kubelet[2689]: E1124 06:46:52.909926 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.909944 kubelet[2689]: W1124 06:46:52.909938 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.909995 kubelet[2689]: E1124 06:46:52.909947 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.910104 kubelet[2689]: E1124 06:46:52.910093 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.910104 kubelet[2689]: W1124 06:46:52.910103 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.910146 kubelet[2689]: E1124 06:46:52.910109 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.910252 kubelet[2689]: E1124 06:46:52.910242 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.910252 kubelet[2689]: W1124 06:46:52.910249 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.910304 kubelet[2689]: E1124 06:46:52.910256 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.910509 kubelet[2689]: E1124 06:46:52.910496 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.910509 kubelet[2689]: W1124 06:46:52.910504 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.910568 kubelet[2689]: E1124 06:46:52.910511 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.910688 kubelet[2689]: E1124 06:46:52.910676 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.910688 kubelet[2689]: W1124 06:46:52.910685 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.910735 kubelet[2689]: E1124 06:46:52.910694 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.910850 kubelet[2689]: E1124 06:46:52.910839 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.910850 kubelet[2689]: W1124 06:46:52.910847 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.910894 kubelet[2689]: E1124 06:46:52.910854 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.910999 kubelet[2689]: E1124 06:46:52.910988 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.910999 kubelet[2689]: W1124 06:46:52.910997 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.911046 kubelet[2689]: E1124 06:46:52.911003 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.911189 kubelet[2689]: E1124 06:46:52.911163 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.911189 kubelet[2689]: W1124 06:46:52.911175 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.911189 kubelet[2689]: E1124 06:46:52.911184 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.911364 kubelet[2689]: E1124 06:46:52.911359 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.911392 kubelet[2689]: W1124 06:46:52.911366 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.911392 kubelet[2689]: E1124 06:46:52.911374 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.911563 kubelet[2689]: E1124 06:46:52.911539 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.911563 kubelet[2689]: W1124 06:46:52.911550 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.911563 kubelet[2689]: E1124 06:46:52.911557 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.911724 kubelet[2689]: E1124 06:46:52.911710 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.911724 kubelet[2689]: W1124 06:46:52.911719 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.911770 kubelet[2689]: E1124 06:46:52.911726 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.911891 kubelet[2689]: E1124 06:46:52.911877 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.911891 kubelet[2689]: W1124 06:46:52.911886 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.911933 kubelet[2689]: E1124 06:46:52.911894 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.912050 kubelet[2689]: E1124 06:46:52.912036 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.912050 kubelet[2689]: W1124 06:46:52.912046 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.912097 kubelet[2689]: E1124 06:46:52.912053 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.912195 kubelet[2689]: E1124 06:46:52.912182 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.912195 kubelet[2689]: W1124 06:46:52.912191 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.912237 kubelet[2689]: E1124 06:46:52.912199 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.912365 kubelet[2689]: E1124 06:46:52.912352 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.912365 kubelet[2689]: W1124 06:46:52.912360 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.912408 kubelet[2689]: E1124 06:46:52.912367 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.912526 kubelet[2689]: E1124 06:46:52.912511 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.912526 kubelet[2689]: W1124 06:46:52.912521 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.912580 kubelet[2689]: E1124 06:46:52.912528 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.912699 kubelet[2689]: E1124 06:46:52.912684 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.912699 kubelet[2689]: W1124 06:46:52.912694 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.912699 kubelet[2689]: E1124 06:46:52.912701 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.912851 kubelet[2689]: E1124 06:46:52.912840 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.912851 kubelet[2689]: W1124 06:46:52.912848 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.912899 kubelet[2689]: E1124 06:46:52.912855 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.913021 kubelet[2689]: E1124 06:46:52.913009 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.913021 kubelet[2689]: W1124 06:46:52.913019 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.913072 kubelet[2689]: E1124 06:46:52.913028 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.913214 kubelet[2689]: E1124 06:46:52.913200 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.913214 kubelet[2689]: W1124 06:46:52.913210 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.913288 kubelet[2689]: E1124 06:46:52.913219 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.913465 kubelet[2689]: E1124 06:46:52.913421 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.913465 kubelet[2689]: W1124 06:46:52.913433 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.913465 kubelet[2689]: E1124 06:46:52.913455 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.913657 kubelet[2689]: E1124 06:46:52.913612 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.913657 kubelet[2689]: W1124 06:46:52.913623 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.913657 kubelet[2689]: E1124 06:46:52.913639 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:52.919525 kubelet[2689]: E1124 06:46:52.919508 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:52.919525 kubelet[2689]: W1124 06:46:52.919520 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:52.919615 kubelet[2689]: E1124 06:46:52.919530 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:52.933915 containerd[1544]: time="2025-11-24T06:46:52.933871338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-d585m,Uid:9d70f6e5-e5cf-4cee-9c71-26b9c38e0ce6,Namespace:calico-system,Attempt:0,} returns sandbox id \"2898da3c7082606675e8d94f0b5f5cfd98ba5e15a5be4e04f8864ee0a60a9b0b\"" Nov 24 06:46:54.093145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151801554.mount: Deactivated successfully. 
Nov 24 06:46:54.433212 containerd[1544]: time="2025-11-24T06:46:54.433092825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:54.433842 containerd[1544]: time="2025-11-24T06:46:54.433818328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 24 06:46:54.434947 containerd[1544]: time="2025-11-24T06:46:54.434912868Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:54.436876 containerd[1544]: time="2025-11-24T06:46:54.436845363Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:54.437368 containerd[1544]: time="2025-11-24T06:46:54.437330161Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.695544821s" Nov 24 06:46:54.437368 containerd[1544]: time="2025-11-24T06:46:54.437363474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 24 06:46:54.442914 containerd[1544]: time="2025-11-24T06:46:54.442890519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 24 06:46:54.454632 containerd[1544]: time="2025-11-24T06:46:54.454584440Z" level=info msg="CreateContainer within sandbox \"09c992acb3484fc6003273a4d091ab855890e9e6929c615e53b2647dcdd32624\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 24 06:46:54.462467 containerd[1544]: time="2025-11-24T06:46:54.462414532Z" level=info msg="Container 128091f7cf717117eec3e634c53142816534208ffff6defc7fcddc9b01d8b83b: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:54.469416 containerd[1544]: time="2025-11-24T06:46:54.469381580Z" level=info msg="CreateContainer within sandbox \"09c992acb3484fc6003273a4d091ab855890e9e6929c615e53b2647dcdd32624\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"128091f7cf717117eec3e634c53142816534208ffff6defc7fcddc9b01d8b83b\"" Nov 24 06:46:54.469825 containerd[1544]: time="2025-11-24T06:46:54.469798760Z" level=info msg="StartContainer for \"128091f7cf717117eec3e634c53142816534208ffff6defc7fcddc9b01d8b83b\"" Nov 24 06:46:54.470826 containerd[1544]: time="2025-11-24T06:46:54.470801617Z" level=info msg="connecting to shim 128091f7cf717117eec3e634c53142816534208ffff6defc7fcddc9b01d8b83b" address="unix:///run/containerd/s/399697844f555a9caa7584bbfa5540a7f60157cffbaca85b57f3fbb5d9e24c49" protocol=ttrpc version=3 Nov 24 06:46:54.492583 systemd[1]: Started cri-containerd-128091f7cf717117eec3e634c53142816534208ffff6defc7fcddc9b01d8b83b.scope - libcontainer container 128091f7cf717117eec3e634c53142816534208ffff6defc7fcddc9b01d8b83b. 
Nov 24 06:46:54.546672 containerd[1544]: time="2025-11-24T06:46:54.546626561Z" level=info msg="StartContainer for \"128091f7cf717117eec3e634c53142816534208ffff6defc7fcddc9b01d8b83b\" returns successfully" Nov 24 06:46:54.792366 kubelet[2689]: E1124 06:46:54.792027 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z57g" podUID="de4b2b6f-2e35-454b-b826-35c899986b61" Nov 24 06:46:54.889343 kubelet[2689]: I1124 06:46:54.889055 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cf99b78b5-fmwjt" podStartSLOduration=1.187612158 podStartE2EDuration="2.889040849s" podCreationTimestamp="2025-11-24 06:46:52 +0000 UTC" firstStartedPulling="2025-11-24 06:46:52.74126197 +0000 UTC m=+20.035913668" lastFinishedPulling="2025-11-24 06:46:54.44269066 +0000 UTC m=+21.737342359" observedRunningTime="2025-11-24 06:46:54.888740831 +0000 UTC m=+22.183392529" watchObservedRunningTime="2025-11-24 06:46:54.889040849 +0000 UTC m=+22.183692547" Nov 24 06:46:54.904999 kubelet[2689]: E1124 06:46:54.904938 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.904999 kubelet[2689]: W1124 06:46:54.904963 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.904999 kubelet[2689]: E1124 06:46:54.904983 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.905335 kubelet[2689]: E1124 06:46:54.905181 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.905335 kubelet[2689]: W1124 06:46:54.905187 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.905335 kubelet[2689]: E1124 06:46:54.905195 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.905521 kubelet[2689]: E1124 06:46:54.905505 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.905521 kubelet[2689]: W1124 06:46:54.905516 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.905585 kubelet[2689]: E1124 06:46:54.905525 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.905829 kubelet[2689]: E1124 06:46:54.905810 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.905829 kubelet[2689]: W1124 06:46:54.905820 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.905829 kubelet[2689]: E1124 06:46:54.905828 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.906036 kubelet[2689]: E1124 06:46:54.906019 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.906036 kubelet[2689]: W1124 06:46:54.906029 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.906102 kubelet[2689]: E1124 06:46:54.906040 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.906233 kubelet[2689]: E1124 06:46:54.906213 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.906233 kubelet[2689]: W1124 06:46:54.906229 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.906289 kubelet[2689]: E1124 06:46:54.906241 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.906407 kubelet[2689]: E1124 06:46:54.906393 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.906483 kubelet[2689]: W1124 06:46:54.906403 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.906483 kubelet[2689]: E1124 06:46:54.906420 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.906623 kubelet[2689]: E1124 06:46:54.906601 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.906647 kubelet[2689]: W1124 06:46:54.906611 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.906647 kubelet[2689]: E1124 06:46:54.906636 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.906822 kubelet[2689]: E1124 06:46:54.906807 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.906822 kubelet[2689]: W1124 06:46:54.906817 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.906880 kubelet[2689]: E1124 06:46:54.906825 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.906975 kubelet[2689]: E1124 06:46:54.906956 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.906975 kubelet[2689]: W1124 06:46:54.906968 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.906975 kubelet[2689]: E1124 06:46:54.906975 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.907183 kubelet[2689]: E1124 06:46:54.907103 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.907183 kubelet[2689]: W1124 06:46:54.907110 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.907266 kubelet[2689]: E1124 06:46:54.907217 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.907637 kubelet[2689]: E1124 06:46:54.907594 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.907637 kubelet[2689]: W1124 06:46:54.907606 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.907637 kubelet[2689]: E1124 06:46:54.907622 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.908059 kubelet[2689]: E1124 06:46:54.908022 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.908059 kubelet[2689]: W1124 06:46:54.908036 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.908059 kubelet[2689]: E1124 06:46:54.908045 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.908344 kubelet[2689]: E1124 06:46:54.908323 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.908344 kubelet[2689]: W1124 06:46:54.908340 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.908402 kubelet[2689]: E1124 06:46:54.908351 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.908965 kubelet[2689]: E1124 06:46:54.908553 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.908965 kubelet[2689]: W1124 06:46:54.908567 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.908965 kubelet[2689]: E1124 06:46:54.908578 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.925023 kubelet[2689]: E1124 06:46:54.924996 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.925023 kubelet[2689]: W1124 06:46:54.925020 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.925102 kubelet[2689]: E1124 06:46:54.925042 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.925251 kubelet[2689]: E1124 06:46:54.925228 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.925251 kubelet[2689]: W1124 06:46:54.925241 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.925302 kubelet[2689]: E1124 06:46:54.925251 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.925582 kubelet[2689]: E1124 06:46:54.925555 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.925609 kubelet[2689]: W1124 06:46:54.925580 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.925609 kubelet[2689]: E1124 06:46:54.925600 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.925797 kubelet[2689]: E1124 06:46:54.925783 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.925797 kubelet[2689]: W1124 06:46:54.925792 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.925851 kubelet[2689]: E1124 06:46:54.925799 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.925981 kubelet[2689]: E1124 06:46:54.925968 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.925981 kubelet[2689]: W1124 06:46:54.925977 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.926025 kubelet[2689]: E1124 06:46:54.925984 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.926190 kubelet[2689]: E1124 06:46:54.926175 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.926190 kubelet[2689]: W1124 06:46:54.926185 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.926247 kubelet[2689]: E1124 06:46:54.926193 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.926468 kubelet[2689]: E1124 06:46:54.926453 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.926468 kubelet[2689]: W1124 06:46:54.926465 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.926513 kubelet[2689]: E1124 06:46:54.926474 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.926707 kubelet[2689]: E1124 06:46:54.926672 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.926707 kubelet[2689]: W1124 06:46:54.926688 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.926707 kubelet[2689]: E1124 06:46:54.926701 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.926918 kubelet[2689]: E1124 06:46:54.926903 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.926918 kubelet[2689]: W1124 06:46:54.926913 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.926966 kubelet[2689]: E1124 06:46:54.926920 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.927082 kubelet[2689]: E1124 06:46:54.927069 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.927082 kubelet[2689]: W1124 06:46:54.927078 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.927129 kubelet[2689]: E1124 06:46:54.927085 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.927282 kubelet[2689]: E1124 06:46:54.927268 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.927282 kubelet[2689]: W1124 06:46:54.927277 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.927326 kubelet[2689]: E1124 06:46:54.927285 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.927539 kubelet[2689]: E1124 06:46:54.927523 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.927539 kubelet[2689]: W1124 06:46:54.927536 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.927595 kubelet[2689]: E1124 06:46:54.927545 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.927765 kubelet[2689]: E1124 06:46:54.927752 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.927765 kubelet[2689]: W1124 06:46:54.927761 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.927812 kubelet[2689]: E1124 06:46:54.927768 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.927947 kubelet[2689]: E1124 06:46:54.927934 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.927947 kubelet[2689]: W1124 06:46:54.927942 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.927995 kubelet[2689]: E1124 06:46:54.927950 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.928119 kubelet[2689]: E1124 06:46:54.928106 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.928119 kubelet[2689]: W1124 06:46:54.928115 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.928160 kubelet[2689]: E1124 06:46:54.928122 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.928311 kubelet[2689]: E1124 06:46:54.928298 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.928311 kubelet[2689]: W1124 06:46:54.928307 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.928359 kubelet[2689]: E1124 06:46:54.928315 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:54.928588 kubelet[2689]: E1124 06:46:54.928574 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.928588 kubelet[2689]: W1124 06:46:54.928586 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.928646 kubelet[2689]: E1124 06:46:54.928596 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 24 06:46:54.928785 kubelet[2689]: E1124 06:46:54.928771 2689 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 24 06:46:54.928785 kubelet[2689]: W1124 06:46:54.928781 2689 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 24 06:46:54.928833 kubelet[2689]: E1124 06:46:54.928789 2689 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 24 06:46:55.729715 containerd[1544]: time="2025-11-24T06:46:55.729671301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:55.730429 containerd[1544]: time="2025-11-24T06:46:55.730408095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 24 06:46:55.731497 containerd[1544]: time="2025-11-24T06:46:55.731473669Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:55.733413 containerd[1544]: time="2025-11-24T06:46:55.733355999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:55.733817 containerd[1544]: time="2025-11-24T06:46:55.733779750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.290863163s" Nov 24 06:46:55.733817 containerd[1544]: time="2025-11-24T06:46:55.733807993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 24 06:46:55.738022 containerd[1544]: time="2025-11-24T06:46:55.737994680Z" level=info msg="CreateContainer within sandbox \"2898da3c7082606675e8d94f0b5f5cfd98ba5e15a5be4e04f8864ee0a60a9b0b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 24 06:46:55.746595 containerd[1544]: time="2025-11-24T06:46:55.746559476Z" level=info msg="Container fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:55.756225 containerd[1544]: time="2025-11-24T06:46:55.756187091Z" level=info msg="CreateContainer within sandbox \"2898da3c7082606675e8d94f0b5f5cfd98ba5e15a5be4e04f8864ee0a60a9b0b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071\"" Nov 24 06:46:55.756723 containerd[1544]: time="2025-11-24T06:46:55.756687818Z" level=info msg="StartContainer for \"fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071\"" Nov 24 06:46:55.758058 containerd[1544]: time="2025-11-24T06:46:55.758033132Z" level=info msg="connecting to shim fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071" address="unix:///run/containerd/s/a868de683298aca8da0efa385585e7f28a8a19cd0604fcd0a7bd5ba81248f964" protocol=ttrpc version=3 Nov 24 06:46:55.784564 systemd[1]: Started cri-containerd-fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071.scope - libcontainer container fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071. Nov 24 06:46:55.852848 containerd[1544]: time="2025-11-24T06:46:55.852809458Z" level=info msg="StartContainer for \"fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071\" returns successfully" Nov 24 06:46:55.862292 systemd[1]: cri-containerd-fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071.scope: Deactivated successfully. 
Nov 24 06:46:55.864789 containerd[1544]: time="2025-11-24T06:46:55.864755698Z" level=info msg="received container exit event container_id:\"fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071\" id:\"fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071\" pid:3454 exited_at:{seconds:1763966815 nanos:864515143}" Nov 24 06:46:55.892194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc04907a687d3ce41b74b46e19a78701ff47ef6e0ade026e18f20f8d16a3e071-rootfs.mount: Deactivated successfully. Nov 24 06:46:56.785955 kubelet[2689]: E1124 06:46:56.785896 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z57g" podUID="de4b2b6f-2e35-454b-b826-35c899986b61" Nov 24 06:46:56.877329 containerd[1544]: time="2025-11-24T06:46:56.877292738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 24 06:46:58.788709 kubelet[2689]: E1124 06:46:58.788661 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8z57g" podUID="de4b2b6f-2e35-454b-b826-35c899986b61" Nov 24 06:46:59.226119 containerd[1544]: time="2025-11-24T06:46:59.226078039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:59.227504 containerd[1544]: time="2025-11-24T06:46:59.227089219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 24 06:46:59.228453 containerd[1544]: time="2025-11-24T06:46:59.228397390Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:59.230550 containerd[1544]: time="2025-11-24T06:46:59.230504939Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:46:59.231052 containerd[1544]: time="2025-11-24T06:46:59.231011276Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.353685835s" Nov 24 06:46:59.231052 containerd[1544]: time="2025-11-24T06:46:59.231038327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 24 06:46:59.235367 containerd[1544]: time="2025-11-24T06:46:59.235338017Z" level=info msg="CreateContainer within sandbox \"2898da3c7082606675e8d94f0b5f5cfd98ba5e15a5be4e04f8864ee0a60a9b0b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 24 06:46:59.244782 containerd[1544]: time="2025-11-24T06:46:59.244735885Z" level=info msg="Container 3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:46:59.255731 containerd[1544]: time="2025-11-24T06:46:59.255669904Z" level=info msg="CreateContainer within sandbox \"2898da3c7082606675e8d94f0b5f5cfd98ba5e15a5be4e04f8864ee0a60a9b0b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e\"" Nov 24 06:46:59.256198 containerd[1544]: time="2025-11-24T06:46:59.256164398Z" 
level=info msg="StartContainer for \"3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e\"" Nov 24 06:46:59.257368 containerd[1544]: time="2025-11-24T06:46:59.257349076Z" level=info msg="connecting to shim 3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e" address="unix:///run/containerd/s/a868de683298aca8da0efa385585e7f28a8a19cd0604fcd0a7bd5ba81248f964" protocol=ttrpc version=3 Nov 24 06:46:59.281570 systemd[1]: Started cri-containerd-3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e.scope - libcontainer container 3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e. Nov 24 06:46:59.380789 containerd[1544]: time="2025-11-24T06:46:59.380741947Z" level=info msg="StartContainer for \"3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e\" returns successfully" Nov 24 06:47:00.187250 systemd[1]: cri-containerd-3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e.scope: Deactivated successfully. Nov 24 06:47:00.187754 systemd[1]: cri-containerd-3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e.scope: Consumed 654ms CPU time, 177.8M memory peak, 3.3M read from disk, 171.3M written to disk. 
Nov 24 06:47:00.188521 containerd[1544]: time="2025-11-24T06:47:00.188413969Z" level=info msg="received container exit event container_id:\"3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e\" id:\"3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e\" pid:3513 exited_at:{seconds:1763966820 nanos:188082223}" Nov 24 06:47:00.192705 containerd[1544]: time="2025-11-24T06:47:00.192676357Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 24 06:47:00.203898 kubelet[2689]: I1124 06:47:00.203857 2689 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 24 06:47:00.212685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fef1c21cf084f5ed92ea2be0fa00e24d326adf97e8f21031a5991bc3d6c807e-rootfs.mount: Deactivated successfully. Nov 24 06:47:00.412134 systemd[1]: Created slice kubepods-besteffort-podd7e9defc_dc9d_4ff9_ae1d_ab935f2e0e9f.slice - libcontainer container kubepods-besteffort-podd7e9defc_dc9d_4ff9_ae1d_ab935f2e0e9f.slice. Nov 24 06:47:00.424888 systemd[1]: Created slice kubepods-burstable-pod598e36cd_b984_4495_bdd1_88d1ae40f5c0.slice - libcontainer container kubepods-burstable-pod598e36cd_b984_4495_bdd1_88d1ae40f5c0.slice. Nov 24 06:47:00.434600 systemd[1]: Created slice kubepods-besteffort-pod86381663_d21f_4c14_bc69_3f80735f20fe.slice - libcontainer container kubepods-besteffort-pod86381663_d21f_4c14_bc69_3f80735f20fe.slice. Nov 24 06:47:00.440013 systemd[1]: Created slice kubepods-burstable-pod7c1ea070_2521_4273_b2b3_9736eaffd427.slice - libcontainer container kubepods-burstable-pod7c1ea070_2521_4273_b2b3_9736eaffd427.slice. 
Nov 24 06:47:00.446283 systemd[1]: Created slice kubepods-besteffort-pode17ed833_91af_49ab_901e_293fc1161607.slice - libcontainer container kubepods-besteffort-pode17ed833_91af_49ab_901e_293fc1161607.slice. Nov 24 06:47:00.452780 systemd[1]: Created slice kubepods-besteffort-pod181422e2_5cc7_4e92_bf3a_1c5d7cb34c3e.slice - libcontainer container kubepods-besteffort-pod181422e2_5cc7_4e92_bf3a_1c5d7cb34c3e.slice. Nov 24 06:47:00.458993 systemd[1]: Created slice kubepods-besteffort-podcff648c4_cfae_4330_83e3_56fc17913402.slice - libcontainer container kubepods-besteffort-podcff648c4_cfae_4330_83e3_56fc17913402.slice. Nov 24 06:47:00.465486 systemd[1]: Created slice kubepods-besteffort-pod74685ae7_a0a3_452c_92e5_934da9ec5504.slice - libcontainer container kubepods-besteffort-pod74685ae7_a0a3_452c_92e5_934da9ec5504.slice. Nov 24 06:47:00.469883 kubelet[2689]: I1124 06:47:00.469845 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e17ed833-91af-49ab-901e-293fc1161607-calico-apiserver-certs\") pod \"calico-apiserver-7785576499-vjzwh\" (UID: \"e17ed833-91af-49ab-901e-293fc1161607\") " pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" Nov 24 06:47:00.469883 kubelet[2689]: I1124 06:47:00.469877 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/598e36cd-b984-4495-bdd1-88d1ae40f5c0-config-volume\") pod \"coredns-66bc5c9577-52wxk\" (UID: \"598e36cd-b984-4495-bdd1-88d1ae40f5c0\") " pod="kube-system/coredns-66bc5c9577-52wxk" Nov 24 06:47:00.470088 kubelet[2689]: I1124 06:47:00.469929 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/86381663-d21f-4c14-bc69-3f80735f20fe-calico-apiserver-certs\") pod \"calico-apiserver-7785576499-wrvhp\" (UID: 
\"86381663-d21f-4c14-bc69-3f80735f20fe\") " pod="calico-apiserver/calico-apiserver-7785576499-wrvhp" Nov 24 06:47:00.470088 kubelet[2689]: I1124 06:47:00.469965 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx6cx\" (UniqueName: \"kubernetes.io/projected/cff648c4-cfae-4330-83e3-56fc17913402-kube-api-access-kx6cx\") pod \"calico-apiserver-768c95b4f7-bql9j\" (UID: \"cff648c4-cfae-4330-83e3-56fc17913402\") " pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" Nov 24 06:47:00.470088 kubelet[2689]: I1124 06:47:00.469985 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtbvl\" (UniqueName: \"kubernetes.io/projected/74685ae7-a0a3-452c-92e5-934da9ec5504-kube-api-access-vtbvl\") pod \"goldmane-7c778bb748-dw5sv\" (UID: \"74685ae7-a0a3-452c-92e5-934da9ec5504\") " pod="calico-system/goldmane-7c778bb748-dw5sv" Nov 24 06:47:00.470088 kubelet[2689]: I1124 06:47:00.470005 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wm7x\" (UniqueName: \"kubernetes.io/projected/598e36cd-b984-4495-bdd1-88d1ae40f5c0-kube-api-access-4wm7x\") pod \"coredns-66bc5c9577-52wxk\" (UID: \"598e36cd-b984-4495-bdd1-88d1ae40f5c0\") " pod="kube-system/coredns-66bc5c9577-52wxk" Nov 24 06:47:00.470088 kubelet[2689]: I1124 06:47:00.470025 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c1ea070-2521-4273-b2b3-9736eaffd427-config-volume\") pod \"coredns-66bc5c9577-2s5k6\" (UID: \"7c1ea070-2521-4273-b2b3-9736eaffd427\") " pod="kube-system/coredns-66bc5c9577-2s5k6" Nov 24 06:47:00.470213 kubelet[2689]: I1124 06:47:00.470045 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-whisker-backend-key-pair\") pod \"whisker-dc75548b7-56ght\" (UID: \"181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e\") " pod="calico-system/whisker-dc75548b7-56ght" Nov 24 06:47:00.470213 kubelet[2689]: I1124 06:47:00.470064 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdkvk\" (UniqueName: \"kubernetes.io/projected/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-kube-api-access-qdkvk\") pod \"whisker-dc75548b7-56ght\" (UID: \"181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e\") " pod="calico-system/whisker-dc75548b7-56ght" Nov 24 06:47:00.470213 kubelet[2689]: I1124 06:47:00.470082 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cff648c4-cfae-4330-83e3-56fc17913402-calico-apiserver-certs\") pod \"calico-apiserver-768c95b4f7-bql9j\" (UID: \"cff648c4-cfae-4330-83e3-56fc17913402\") " pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" Nov 24 06:47:00.470213 kubelet[2689]: I1124 06:47:00.470104 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtw8t\" (UniqueName: \"kubernetes.io/projected/86381663-d21f-4c14-bc69-3f80735f20fe-kube-api-access-wtw8t\") pod \"calico-apiserver-7785576499-wrvhp\" (UID: \"86381663-d21f-4c14-bc69-3f80735f20fe\") " pod="calico-apiserver/calico-apiserver-7785576499-wrvhp" Nov 24 06:47:00.470213 kubelet[2689]: I1124 06:47:00.470131 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74685ae7-a0a3-452c-92e5-934da9ec5504-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-dw5sv\" (UID: \"74685ae7-a0a3-452c-92e5-934da9ec5504\") " pod="calico-system/goldmane-7c778bb748-dw5sv" Nov 24 06:47:00.470331 kubelet[2689]: I1124 06:47:00.470150 2689 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scqpt\" (UniqueName: \"kubernetes.io/projected/e17ed833-91af-49ab-901e-293fc1161607-kube-api-access-scqpt\") pod \"calico-apiserver-7785576499-vjzwh\" (UID: \"e17ed833-91af-49ab-901e-293fc1161607\") " pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" Nov 24 06:47:00.470331 kubelet[2689]: I1124 06:47:00.470173 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f-tigera-ca-bundle\") pod \"calico-kube-controllers-596cc5c64b-j6f7z\" (UID: \"d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f\") " pod="calico-system/calico-kube-controllers-596cc5c64b-j6f7z" Nov 24 06:47:00.470331 kubelet[2689]: I1124 06:47:00.470189 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74685ae7-a0a3-452c-92e5-934da9ec5504-config\") pod \"goldmane-7c778bb748-dw5sv\" (UID: \"74685ae7-a0a3-452c-92e5-934da9ec5504\") " pod="calico-system/goldmane-7c778bb748-dw5sv" Nov 24 06:47:00.470331 kubelet[2689]: I1124 06:47:00.470219 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/74685ae7-a0a3-452c-92e5-934da9ec5504-goldmane-key-pair\") pod \"goldmane-7c778bb748-dw5sv\" (UID: \"74685ae7-a0a3-452c-92e5-934da9ec5504\") " pod="calico-system/goldmane-7c778bb748-dw5sv" Nov 24 06:47:00.470331 kubelet[2689]: I1124 06:47:00.470245 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sltvl\" (UniqueName: \"kubernetes.io/projected/7c1ea070-2521-4273-b2b3-9736eaffd427-kube-api-access-sltvl\") pod \"coredns-66bc5c9577-2s5k6\" (UID: \"7c1ea070-2521-4273-b2b3-9736eaffd427\") " pod="kube-system/coredns-66bc5c9577-2s5k6" Nov 24 
06:47:00.470462 kubelet[2689]: I1124 06:47:00.470265 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm8r5\" (UniqueName: \"kubernetes.io/projected/d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f-kube-api-access-wm8r5\") pod \"calico-kube-controllers-596cc5c64b-j6f7z\" (UID: \"d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f\") " pod="calico-system/calico-kube-controllers-596cc5c64b-j6f7z" Nov 24 06:47:00.470462 kubelet[2689]: I1124 06:47:00.470285 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-whisker-ca-bundle\") pod \"whisker-dc75548b7-56ght\" (UID: \"181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e\") " pod="calico-system/whisker-dc75548b7-56ght" Nov 24 06:47:00.723199 containerd[1544]: time="2025-11-24T06:47:00.723144302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596cc5c64b-j6f7z,Uid:d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:00.731274 containerd[1544]: time="2025-11-24T06:47:00.731223706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-52wxk,Uid:598e36cd-b984-4495-bdd1-88d1ae40f5c0,Namespace:kube-system,Attempt:0,}" Nov 24 06:47:00.749428 containerd[1544]: time="2025-11-24T06:47:00.749331484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2s5k6,Uid:7c1ea070-2521-4273-b2b3-9736eaffd427,Namespace:kube-system,Attempt:0,}" Nov 24 06:47:00.749676 containerd[1544]: time="2025-11-24T06:47:00.749642241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7785576499-wrvhp,Uid:86381663-d21f-4c14-bc69-3f80735f20fe,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:47:00.754881 containerd[1544]: time="2025-11-24T06:47:00.754841528Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7785576499-vjzwh,Uid:e17ed833-91af-49ab-901e-293fc1161607,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:47:00.760180 containerd[1544]: time="2025-11-24T06:47:00.760142906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc75548b7-56ght,Uid:181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:00.770315 containerd[1544]: time="2025-11-24T06:47:00.770242875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768c95b4f7-bql9j,Uid:cff648c4-cfae-4330-83e3-56fc17913402,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:47:00.770964 containerd[1544]: time="2025-11-24T06:47:00.770948798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dw5sv,Uid:74685ae7-a0a3-452c-92e5-934da9ec5504,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:00.796248 systemd[1]: Created slice kubepods-besteffort-podde4b2b6f_2e35_454b_b826_35c899986b61.slice - libcontainer container kubepods-besteffort-podde4b2b6f_2e35_454b_b826_35c899986b61.slice. 
Nov 24 06:47:00.810711 containerd[1544]: time="2025-11-24T06:47:00.810667678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z57g,Uid:de4b2b6f-2e35-454b-b826-35c899986b61,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:00.872392 containerd[1544]: time="2025-11-24T06:47:00.872251410Z" level=error msg="Failed to destroy network for sandbox \"e6ea5cbb0c2be6b3bbf2e0f642a42102ca63ad59b9033ccfcff61c2c653f55ac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.886232 containerd[1544]: time="2025-11-24T06:47:00.886171751Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-52wxk,Uid:598e36cd-b984-4495-bdd1-88d1ae40f5c0,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ea5cbb0c2be6b3bbf2e0f642a42102ca63ad59b9033ccfcff61c2c653f55ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.893971 kubelet[2689]: E1124 06:47:00.892876 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ea5cbb0c2be6b3bbf2e0f642a42102ca63ad59b9033ccfcff61c2c653f55ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.893971 kubelet[2689]: E1124 06:47:00.892940 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ea5cbb0c2be6b3bbf2e0f642a42102ca63ad59b9033ccfcff61c2c653f55ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-52wxk" Nov 24 06:47:00.893971 kubelet[2689]: E1124 06:47:00.892956 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e6ea5cbb0c2be6b3bbf2e0f642a42102ca63ad59b9033ccfcff61c2c653f55ac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-52wxk" Nov 24 06:47:00.894158 kubelet[2689]: E1124 06:47:00.893008 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-52wxk_kube-system(598e36cd-b984-4495-bdd1-88d1ae40f5c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-52wxk_kube-system(598e36cd-b984-4495-bdd1-88d1ae40f5c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e6ea5cbb0c2be6b3bbf2e0f642a42102ca63ad59b9033ccfcff61c2c653f55ac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-52wxk" podUID="598e36cd-b984-4495-bdd1-88d1ae40f5c0" Nov 24 06:47:00.894343 containerd[1544]: time="2025-11-24T06:47:00.894201121Z" level=error msg="Failed to destroy network for sandbox \"f70412080c8cd412f375b6ac003dfcbdd8d74be686edf88cf6a4eb2665e9c063\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.894640 containerd[1544]: time="2025-11-24T06:47:00.894613289Z" level=error msg="Failed to destroy network for sandbox 
\"a6360f1371f3c16de4bdb09f3564940b915a7c4580779a74a793695999ee9fa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.897062 containerd[1544]: time="2025-11-24T06:47:00.897007350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2s5k6,Uid:7c1ea070-2521-4273-b2b3-9736eaffd427,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70412080c8cd412f375b6ac003dfcbdd8d74be686edf88cf6a4eb2665e9c063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.897507 kubelet[2689]: E1124 06:47:00.897478 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70412080c8cd412f375b6ac003dfcbdd8d74be686edf88cf6a4eb2665e9c063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.897583 kubelet[2689]: E1124 06:47:00.897510 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70412080c8cd412f375b6ac003dfcbdd8d74be686edf88cf6a4eb2665e9c063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2s5k6" Nov 24 06:47:00.897583 kubelet[2689]: E1124 06:47:00.897525 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f70412080c8cd412f375b6ac003dfcbdd8d74be686edf88cf6a4eb2665e9c063\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2s5k6" Nov 24 06:47:00.897583 kubelet[2689]: E1124 06:47:00.897574 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2s5k6_kube-system(7c1ea070-2521-4273-b2b3-9736eaffd427)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2s5k6_kube-system(7c1ea070-2521-4273-b2b3-9736eaffd427)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f70412080c8cd412f375b6ac003dfcbdd8d74be686edf88cf6a4eb2665e9c063\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2s5k6" podUID="7c1ea070-2521-4273-b2b3-9736eaffd427" Nov 24 06:47:00.898653 containerd[1544]: time="2025-11-24T06:47:00.898582784Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7785576499-wrvhp,Uid:86381663-d21f-4c14-bc69-3f80735f20fe,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6360f1371f3c16de4bdb09f3564940b915a7c4580779a74a793695999ee9fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.898790 kubelet[2689]: E1124 06:47:00.898743 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6360f1371f3c16de4bdb09f3564940b915a7c4580779a74a793695999ee9fa5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.898872 kubelet[2689]: E1124 06:47:00.898807 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6360f1371f3c16de4bdb09f3564940b915a7c4580779a74a793695999ee9fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7785576499-wrvhp" Nov 24 06:47:00.898872 kubelet[2689]: E1124 06:47:00.898844 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6360f1371f3c16de4bdb09f3564940b915a7c4580779a74a793695999ee9fa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7785576499-wrvhp" Nov 24 06:47:00.898925 kubelet[2689]: E1124 06:47:00.898896 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7785576499-wrvhp_calico-apiserver(86381663-d21f-4c14-bc69-3f80735f20fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7785576499-wrvhp_calico-apiserver(86381663-d21f-4c14-bc69-3f80735f20fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6360f1371f3c16de4bdb09f3564940b915a7c4580779a74a793695999ee9fa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7785576499-wrvhp" podUID="86381663-d21f-4c14-bc69-3f80735f20fe" Nov 24 
06:47:00.911864 containerd[1544]: time="2025-11-24T06:47:00.911248968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 24 06:47:00.926012 containerd[1544]: time="2025-11-24T06:47:00.925904729Z" level=error msg="Failed to destroy network for sandbox \"35da92bd4f0f2bbbc05d80b6cfdb7f7df4cccef00ceec955701c4f3a6dcd1474\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.927352 containerd[1544]: time="2025-11-24T06:47:00.927327245Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7785576499-vjzwh,Uid:e17ed833-91af-49ab-901e-293fc1161607,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"35da92bd4f0f2bbbc05d80b6cfdb7f7df4cccef00ceec955701c4f3a6dcd1474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.929017 containerd[1544]: time="2025-11-24T06:47:00.928998179Z" level=error msg="Failed to destroy network for sandbox \"c9743e849b4886abca286a4051c9adefe3a08c30dd61eb4cd9afedf108d8230b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.930505 containerd[1544]: time="2025-11-24T06:47:00.930481981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596cc5c64b-j6f7z,Uid:d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9743e849b4886abca286a4051c9adefe3a08c30dd61eb4cd9afedf108d8230b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.931997 kubelet[2689]: E1124 06:47:00.931638 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9743e849b4886abca286a4051c9adefe3a08c30dd61eb4cd9afedf108d8230b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.931997 kubelet[2689]: E1124 06:47:00.931698 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9743e849b4886abca286a4051c9adefe3a08c30dd61eb4cd9afedf108d8230b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596cc5c64b-j6f7z" Nov 24 06:47:00.931997 kubelet[2689]: E1124 06:47:00.931715 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9743e849b4886abca286a4051c9adefe3a08c30dd61eb4cd9afedf108d8230b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596cc5c64b-j6f7z" Nov 24 06:47:00.931997 kubelet[2689]: E1124 06:47:00.931650 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35da92bd4f0f2bbbc05d80b6cfdb7f7df4cccef00ceec955701c4f3a6dcd1474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 24 06:47:00.932142 kubelet[2689]: E1124 06:47:00.931763 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596cc5c64b-j6f7z_calico-system(d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596cc5c64b-j6f7z_calico-system(d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9743e849b4886abca286a4051c9adefe3a08c30dd61eb4cd9afedf108d8230b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596cc5c64b-j6f7z" podUID="d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f" Nov 24 06:47:00.932142 kubelet[2689]: E1124 06:47:00.931809 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35da92bd4f0f2bbbc05d80b6cfdb7f7df4cccef00ceec955701c4f3a6dcd1474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" Nov 24 06:47:00.932142 kubelet[2689]: E1124 06:47:00.931839 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35da92bd4f0f2bbbc05d80b6cfdb7f7df4cccef00ceec955701c4f3a6dcd1474\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" Nov 24 06:47:00.932237 kubelet[2689]: E1124 06:47:00.931893 2689 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7785576499-vjzwh_calico-apiserver(e17ed833-91af-49ab-901e-293fc1161607)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7785576499-vjzwh_calico-apiserver(e17ed833-91af-49ab-901e-293fc1161607)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35da92bd4f0f2bbbc05d80b6cfdb7f7df4cccef00ceec955701c4f3a6dcd1474\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" podUID="e17ed833-91af-49ab-901e-293fc1161607" Nov 24 06:47:00.947046 containerd[1544]: time="2025-11-24T06:47:00.946985991Z" level=error msg="Failed to destroy network for sandbox \"0a7a1e0ecd65d9b78fa1f6c7d1e78c31acb41ba22a108a7ef0fa30b4fca5c92d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.948679 containerd[1544]: time="2025-11-24T06:47:00.948654270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-dc75548b7-56ght,Uid:181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7a1e0ecd65d9b78fa1f6c7d1e78c31acb41ba22a108a7ef0fa30b4fca5c92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.949037 kubelet[2689]: E1124 06:47:00.948993 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7a1e0ecd65d9b78fa1f6c7d1e78c31acb41ba22a108a7ef0fa30b4fca5c92d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.949209 kubelet[2689]: E1124 06:47:00.949056 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7a1e0ecd65d9b78fa1f6c7d1e78c31acb41ba22a108a7ef0fa30b4fca5c92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc75548b7-56ght" Nov 24 06:47:00.949209 kubelet[2689]: E1124 06:47:00.949074 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7a1e0ecd65d9b78fa1f6c7d1e78c31acb41ba22a108a7ef0fa30b4fca5c92d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-dc75548b7-56ght" Nov 24 06:47:00.949209 kubelet[2689]: E1124 06:47:00.949138 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-dc75548b7-56ght_calico-system(181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-dc75548b7-56ght_calico-system(181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a7a1e0ecd65d9b78fa1f6c7d1e78c31acb41ba22a108a7ef0fa30b4fca5c92d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-dc75548b7-56ght" podUID="181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e" Nov 24 06:47:00.958763 containerd[1544]: time="2025-11-24T06:47:00.958723011Z" 
level=error msg="Failed to destroy network for sandbox \"6b228af92fe84442ec09ddf2ed7157725d3639adeb3770abe35035fe0b85f10c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.960454 containerd[1544]: time="2025-11-24T06:47:00.960402922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z57g,Uid:de4b2b6f-2e35-454b-b826-35c899986b61,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b228af92fe84442ec09ddf2ed7157725d3639adeb3770abe35035fe0b85f10c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.960672 containerd[1544]: time="2025-11-24T06:47:00.960609101Z" level=error msg="Failed to destroy network for sandbox \"aa95342c2c5f72f37b604ebc48389a9551bf9f78a3fb19f521056c1546b4eecc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.960826 kubelet[2689]: E1124 06:47:00.960793 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b228af92fe84442ec09ddf2ed7157725d3639adeb3770abe35035fe0b85f10c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.960937 kubelet[2689]: E1124 06:47:00.960838 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b228af92fe84442ec09ddf2ed7157725d3639adeb3770abe35035fe0b85f10c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8z57g" Nov 24 06:47:00.960937 kubelet[2689]: E1124 06:47:00.960869 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b228af92fe84442ec09ddf2ed7157725d3639adeb3770abe35035fe0b85f10c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8z57g" Nov 24 06:47:00.961000 kubelet[2689]: E1124 06:47:00.960926 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8z57g_calico-system(de4b2b6f-2e35-454b-b826-35c899986b61)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8z57g_calico-system(de4b2b6f-2e35-454b-b826-35c899986b61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b228af92fe84442ec09ddf2ed7157725d3639adeb3770abe35035fe0b85f10c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8z57g" podUID="de4b2b6f-2e35-454b-b826-35c899986b61" Nov 24 06:47:00.962023 containerd[1544]: time="2025-11-24T06:47:00.961981703Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768c95b4f7-bql9j,Uid:cff648c4-cfae-4330-83e3-56fc17913402,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa95342c2c5f72f37b604ebc48389a9551bf9f78a3fb19f521056c1546b4eecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.962413 kubelet[2689]: E1124 06:47:00.962164 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa95342c2c5f72f37b604ebc48389a9551bf9f78a3fb19f521056c1546b4eecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.962413 kubelet[2689]: E1124 06:47:00.962191 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa95342c2c5f72f37b604ebc48389a9551bf9f78a3fb19f521056c1546b4eecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" Nov 24 06:47:00.962413 kubelet[2689]: E1124 06:47:00.962216 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa95342c2c5f72f37b604ebc48389a9551bf9f78a3fb19f521056c1546b4eecc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" Nov 24 06:47:00.962527 kubelet[2689]: E1124 06:47:00.962250 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-768c95b4f7-bql9j_calico-apiserver(cff648c4-cfae-4330-83e3-56fc17913402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-768c95b4f7-bql9j_calico-apiserver(cff648c4-cfae-4330-83e3-56fc17913402)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"aa95342c2c5f72f37b604ebc48389a9551bf9f78a3fb19f521056c1546b4eecc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" podUID="cff648c4-cfae-4330-83e3-56fc17913402" Nov 24 06:47:00.966211 containerd[1544]: time="2025-11-24T06:47:00.965968219Z" level=error msg="Failed to destroy network for sandbox \"a10033e15e3f5c19f8f91188f53c095f0c7e6b8501c9c292c518e985b8a7f45d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.967271 containerd[1544]: time="2025-11-24T06:47:00.967235021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dw5sv,Uid:74685ae7-a0a3-452c-92e5-934da9ec5504,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a10033e15e3f5c19f8f91188f53c095f0c7e6b8501c9c292c518e985b8a7f45d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.967408 kubelet[2689]: E1124 06:47:00.967372 2689 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a10033e15e3f5c19f8f91188f53c095f0c7e6b8501c9c292c518e985b8a7f45d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 24 06:47:00.967455 kubelet[2689]: E1124 06:47:00.967407 2689 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a10033e15e3f5c19f8f91188f53c095f0c7e6b8501c9c292c518e985b8a7f45d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dw5sv" Nov 24 06:47:00.967514 kubelet[2689]: E1124 06:47:00.967422 2689 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a10033e15e3f5c19f8f91188f53c095f0c7e6b8501c9c292c518e985b8a7f45d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-dw5sv" Nov 24 06:47:00.967589 kubelet[2689]: E1124 06:47:00.967545 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-dw5sv_calico-system(74685ae7-a0a3-452c-92e5-934da9ec5504)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-dw5sv_calico-system(74685ae7-a0a3-452c-92e5-934da9ec5504)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a10033e15e3f5c19f8f91188f53c095f0c7e6b8501c9c292c518e985b8a7f45d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-dw5sv" podUID="74685ae7-a0a3-452c-92e5-934da9ec5504" Nov 24 06:47:01.578471 systemd[1]: run-netns-cni\x2dd642fba8\x2d096e\x2d52a7\x2d4cb5\x2dbf4dc5a34a29.mount: Deactivated successfully. Nov 24 06:47:01.578594 systemd[1]: run-netns-cni\x2de166055b\x2d047e\x2d697f\x2d3d71\x2d774665b1b408.mount: Deactivated successfully. Nov 24 06:47:05.721465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2307997619.mount: Deactivated successfully. 
Nov 24 06:47:06.226410 containerd[1544]: time="2025-11-24T06:47:06.226340354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:47:06.227284 containerd[1544]: time="2025-11-24T06:47:06.227237997Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 24 06:47:06.228462 containerd[1544]: time="2025-11-24T06:47:06.228412521Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:47:06.230310 containerd[1544]: time="2025-11-24T06:47:06.230271377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 24 06:47:06.230857 containerd[1544]: time="2025-11-24T06:47:06.230817978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 5.319531879s" Nov 24 06:47:06.230919 containerd[1544]: time="2025-11-24T06:47:06.230858303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 24 06:47:06.250692 containerd[1544]: time="2025-11-24T06:47:06.250653328Z" level=info msg="CreateContainer within sandbox \"2898da3c7082606675e8d94f0b5f5cfd98ba5e15a5be4e04f8864ee0a60a9b0b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 24 06:47:06.265285 containerd[1544]: time="2025-11-24T06:47:06.265229402Z" level=info msg="Container 
1924127792e16948613ecc8c85b82faea358675fe2d97ecb5af52e8bd780a244: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:47:06.277995 containerd[1544]: time="2025-11-24T06:47:06.277947984Z" level=info msg="CreateContainer within sandbox \"2898da3c7082606675e8d94f0b5f5cfd98ba5e15a5be4e04f8864ee0a60a9b0b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1924127792e16948613ecc8c85b82faea358675fe2d97ecb5af52e8bd780a244\"" Nov 24 06:47:06.279462 containerd[1544]: time="2025-11-24T06:47:06.278476089Z" level=info msg="StartContainer for \"1924127792e16948613ecc8c85b82faea358675fe2d97ecb5af52e8bd780a244\"" Nov 24 06:47:06.280275 containerd[1544]: time="2025-11-24T06:47:06.280240367Z" level=info msg="connecting to shim 1924127792e16948613ecc8c85b82faea358675fe2d97ecb5af52e8bd780a244" address="unix:///run/containerd/s/a868de683298aca8da0efa385585e7f28a8a19cd0604fcd0a7bd5ba81248f964" protocol=ttrpc version=3 Nov 24 06:47:06.301716 systemd[1]: Started cri-containerd-1924127792e16948613ecc8c85b82faea358675fe2d97ecb5af52e8bd780a244.scope - libcontainer container 1924127792e16948613ecc8c85b82faea358675fe2d97ecb5af52e8bd780a244. Nov 24 06:47:06.386836 containerd[1544]: time="2025-11-24T06:47:06.386788129Z" level=info msg="StartContainer for \"1924127792e16948613ecc8c85b82faea358675fe2d97ecb5af52e8bd780a244\" returns successfully" Nov 24 06:47:06.459266 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 24 06:47:06.460127 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Nov 24 06:47:06.711048 kubelet[2689]: I1124 06:47:06.710414 2689 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-whisker-backend-key-pair\") pod \"181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e\" (UID: \"181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e\") " Nov 24 06:47:06.711048 kubelet[2689]: I1124 06:47:06.710474 2689 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-whisker-ca-bundle\") pod \"181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e\" (UID: \"181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e\") " Nov 24 06:47:06.711048 kubelet[2689]: I1124 06:47:06.710509 2689 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdkvk\" (UniqueName: \"kubernetes.io/projected/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-kube-api-access-qdkvk\") pod \"181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e\" (UID: \"181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e\") " Nov 24 06:47:06.711942 kubelet[2689]: I1124 06:47:06.711814 2689 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e" (UID: "181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 24 06:47:06.714536 kubelet[2689]: I1124 06:47:06.714491 2689 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e" (UID: "181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 24 06:47:06.715005 kubelet[2689]: I1124 06:47:06.714955 2689 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-kube-api-access-qdkvk" (OuterVolumeSpecName: "kube-api-access-qdkvk") pod "181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e" (UID: "181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e"). InnerVolumeSpecName "kube-api-access-qdkvk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 24 06:47:06.722740 systemd[1]: var-lib-kubelet-pods-181422e2\x2d5cc7\x2d4e92\x2dbf3a\x2d1c5d7cb34c3e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqdkvk.mount: Deactivated successfully. Nov 24 06:47:06.722891 systemd[1]: var-lib-kubelet-pods-181422e2\x2d5cc7\x2d4e92\x2dbf3a\x2d1c5d7cb34c3e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 24 06:47:06.793340 systemd[1]: Removed slice kubepods-besteffort-pod181422e2_5cc7_4e92_bf3a_1c5d7cb34c3e.slice - libcontainer container kubepods-besteffort-pod181422e2_5cc7_4e92_bf3a_1c5d7cb34c3e.slice. 
Nov 24 06:47:06.811505 kubelet[2689]: I1124 06:47:06.811467 2689 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qdkvk\" (UniqueName: \"kubernetes.io/projected/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-kube-api-access-qdkvk\") on node \"localhost\" DevicePath \"\"" Nov 24 06:47:06.811505 kubelet[2689]: I1124 06:47:06.811499 2689 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 24 06:47:06.811505 kubelet[2689]: I1124 06:47:06.811510 2689 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 24 06:47:06.947272 kubelet[2689]: I1124 06:47:06.947208 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-d585m" podStartSLOduration=1.6506821980000002 podStartE2EDuration="14.947193528s" podCreationTimestamp="2025-11-24 06:46:52 +0000 UTC" firstStartedPulling="2025-11-24 06:46:52.934966091 +0000 UTC m=+20.229617789" lastFinishedPulling="2025-11-24 06:47:06.231477421 +0000 UTC m=+33.526129119" observedRunningTime="2025-11-24 06:47:06.946617773 +0000 UTC m=+34.241269471" watchObservedRunningTime="2025-11-24 06:47:06.947193528 +0000 UTC m=+34.241845216" Nov 24 06:47:06.990356 systemd[1]: Created slice kubepods-besteffort-podacd6d1b4_5631_405d_9c2a_2cf6826dc7b1.slice - libcontainer container kubepods-besteffort-podacd6d1b4_5631_405d_9c2a_2cf6826dc7b1.slice. 
Nov 24 06:47:07.113383 kubelet[2689]: I1124 06:47:07.113338 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zg68k\" (UniqueName: \"kubernetes.io/projected/acd6d1b4-5631-405d-9c2a-2cf6826dc7b1-kube-api-access-zg68k\") pod \"whisker-55b85594bc-ngknk\" (UID: \"acd6d1b4-5631-405d-9c2a-2cf6826dc7b1\") " pod="calico-system/whisker-55b85594bc-ngknk" Nov 24 06:47:07.113383 kubelet[2689]: I1124 06:47:07.113381 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/acd6d1b4-5631-405d-9c2a-2cf6826dc7b1-whisker-backend-key-pair\") pod \"whisker-55b85594bc-ngknk\" (UID: \"acd6d1b4-5631-405d-9c2a-2cf6826dc7b1\") " pod="calico-system/whisker-55b85594bc-ngknk" Nov 24 06:47:07.113383 kubelet[2689]: I1124 06:47:07.113402 2689 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acd6d1b4-5631-405d-9c2a-2cf6826dc7b1-whisker-ca-bundle\") pod \"whisker-55b85594bc-ngknk\" (UID: \"acd6d1b4-5631-405d-9c2a-2cf6826dc7b1\") " pod="calico-system/whisker-55b85594bc-ngknk" Nov 24 06:47:07.296213 containerd[1544]: time="2025-11-24T06:47:07.296088425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55b85594bc-ngknk,Uid:acd6d1b4-5631-405d-9c2a-2cf6826dc7b1,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:07.437328 systemd-networkd[1449]: cali2ccafe23667: Link UP Nov 24 06:47:07.438126 systemd-networkd[1449]: cali2ccafe23667: Gained carrier Nov 24 06:47:07.450233 containerd[1544]: 2025-11-24 06:47:07.320 [INFO][3935] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 24 06:47:07.450233 containerd[1544]: 2025-11-24 06:47:07.337 [INFO][3935] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--55b85594bc--ngknk-eth0 whisker-55b85594bc- calico-system acd6d1b4-5631-405d-9c2a-2cf6826dc7b1 899 0 2025-11-24 06:47:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55b85594bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-55b85594bc-ngknk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2ccafe23667 [] [] }} ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Namespace="calico-system" Pod="whisker-55b85594bc-ngknk" WorkloadEndpoint="localhost-k8s-whisker--55b85594bc--ngknk-" Nov 24 06:47:07.450233 containerd[1544]: 2025-11-24 06:47:07.337 [INFO][3935] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Namespace="calico-system" Pod="whisker-55b85594bc-ngknk" WorkloadEndpoint="localhost-k8s-whisker--55b85594bc--ngknk-eth0" Nov 24 06:47:07.450233 containerd[1544]: 2025-11-24 06:47:07.397 [INFO][3950] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" HandleID="k8s-pod-network.995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Workload="localhost-k8s-whisker--55b85594bc--ngknk-eth0" Nov 24 06:47:07.450458 containerd[1544]: 2025-11-24 06:47:07.398 [INFO][3950] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" HandleID="k8s-pod-network.995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Workload="localhost-k8s-whisker--55b85594bc--ngknk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-55b85594bc-ngknk", "timestamp":"2025-11-24 06:47:07.397842055 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:07.450458 containerd[1544]: 2025-11-24 06:47:07.399 [INFO][3950] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:07.450458 containerd[1544]: 2025-11-24 06:47:07.399 [INFO][3950] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:47:07.450458 containerd[1544]: 2025-11-24 06:47:07.399 [INFO][3950] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:07.450458 containerd[1544]: 2025-11-24 06:47:07.406 [INFO][3950] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" host="localhost" Nov 24 06:47:07.450458 containerd[1544]: 2025-11-24 06:47:07.411 [INFO][3950] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:07.450458 containerd[1544]: 2025-11-24 06:47:07.414 [INFO][3950] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:07.450458 containerd[1544]: 2025-11-24 06:47:07.416 [INFO][3950] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:07.450458 containerd[1544]: 2025-11-24 06:47:07.418 [INFO][3950] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:07.450458 containerd[1544]: 2025-11-24 06:47:07.418 [INFO][3950] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" host="localhost" Nov 24 06:47:07.450692 containerd[1544]: 2025-11-24 06:47:07.419 [INFO][3950] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e Nov 24 06:47:07.450692 containerd[1544]: 2025-11-24 06:47:07.423 [INFO][3950] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" host="localhost" Nov 24 06:47:07.450692 containerd[1544]: 2025-11-24 06:47:07.427 [INFO][3950] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" host="localhost" Nov 24 06:47:07.450692 containerd[1544]: 2025-11-24 06:47:07.427 [INFO][3950] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" host="localhost" Nov 24 06:47:07.450692 containerd[1544]: 2025-11-24 06:47:07.427 [INFO][3950] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:07.450692 containerd[1544]: 2025-11-24 06:47:07.427 [INFO][3950] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" HandleID="k8s-pod-network.995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Workload="localhost-k8s-whisker--55b85594bc--ngknk-eth0" Nov 24 06:47:07.450807 containerd[1544]: 2025-11-24 06:47:07.430 [INFO][3935] cni-plugin/k8s.go 418: Populated endpoint ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Namespace="calico-system" Pod="whisker-55b85594bc-ngknk" WorkloadEndpoint="localhost-k8s-whisker--55b85594bc--ngknk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55b85594bc--ngknk-eth0", GenerateName:"whisker-55b85594bc-", Namespace:"calico-system", SelfLink:"", UID:"acd6d1b4-5631-405d-9c2a-2cf6826dc7b1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 47, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55b85594bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-55b85594bc-ngknk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2ccafe23667", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:07.450807 containerd[1544]: 2025-11-24 06:47:07.430 [INFO][3935] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Namespace="calico-system" Pod="whisker-55b85594bc-ngknk" WorkloadEndpoint="localhost-k8s-whisker--55b85594bc--ngknk-eth0" Nov 24 06:47:07.450876 containerd[1544]: 2025-11-24 06:47:07.430 [INFO][3935] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ccafe23667 ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Namespace="calico-system" Pod="whisker-55b85594bc-ngknk" WorkloadEndpoint="localhost-k8s-whisker--55b85594bc--ngknk-eth0" Nov 24 06:47:07.450876 containerd[1544]: 2025-11-24 06:47:07.437 [INFO][3935] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Namespace="calico-system" Pod="whisker-55b85594bc-ngknk" WorkloadEndpoint="localhost-k8s-whisker--55b85594bc--ngknk-eth0" Nov 24 06:47:07.450925 containerd[1544]: 2025-11-24 06:47:07.438 [INFO][3935] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Namespace="calico-system" Pod="whisker-55b85594bc-ngknk" WorkloadEndpoint="localhost-k8s-whisker--55b85594bc--ngknk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--55b85594bc--ngknk-eth0", GenerateName:"whisker-55b85594bc-", Namespace:"calico-system", SelfLink:"", UID:"acd6d1b4-5631-405d-9c2a-2cf6826dc7b1", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 47, 6, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55b85594bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e", Pod:"whisker-55b85594bc-ngknk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2ccafe23667", MAC:"1a:15:41:d2:b5:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:07.450973 containerd[1544]: 2025-11-24 06:47:07.446 [INFO][3935] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" Namespace="calico-system" Pod="whisker-55b85594bc-ngknk" WorkloadEndpoint="localhost-k8s-whisker--55b85594bc--ngknk-eth0" Nov 24 06:47:07.496723 containerd[1544]: time="2025-11-24T06:47:07.496661984Z" level=info msg="connecting to shim 995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e" address="unix:///run/containerd/s/7c0d06c94e642a31640ad25a60393480dc599f8611f617a98c493dc67abe51ce" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:07.529658 systemd[1]: Started cri-containerd-995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e.scope - libcontainer container 995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e. 
Nov 24 06:47:07.543071 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:07.572677 containerd[1544]: time="2025-11-24T06:47:07.572543038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55b85594bc-ngknk,Uid:acd6d1b4-5631-405d-9c2a-2cf6826dc7b1,Namespace:calico-system,Attempt:0,} returns sandbox id \"995ecbce2aced1ffccece469db79bbdb35411e13dca93682a592ea5449b8412e\"" Nov 24 06:47:07.574823 containerd[1544]: time="2025-11-24T06:47:07.574789363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 06:47:07.919165 containerd[1544]: time="2025-11-24T06:47:07.918936371Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:07.979082 containerd[1544]: time="2025-11-24T06:47:07.977883230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 06:47:07.998540 containerd[1544]: time="2025-11-24T06:47:07.998484758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 06:47:08.000665 kubelet[2689]: E1124 06:47:08.000585 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:08.001272 kubelet[2689]: E1124 06:47:08.001198 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:08.001676 kubelet[2689]: E1124 06:47:08.001643 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-55b85594bc-ngknk_calico-system(acd6d1b4-5631-405d-9c2a-2cf6826dc7b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:08.002668 containerd[1544]: time="2025-11-24T06:47:08.002631315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 06:47:08.256289 systemd-networkd[1449]: vxlan.calico: Link UP Nov 24 06:47:08.256300 systemd-networkd[1449]: vxlan.calico: Gained carrier Nov 24 06:47:08.390580 containerd[1544]: time="2025-11-24T06:47:08.390538674Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:08.391566 containerd[1544]: time="2025-11-24T06:47:08.391538048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 06:47:08.391685 containerd[1544]: time="2025-11-24T06:47:08.391623389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:08.391812 kubelet[2689]: E1124 06:47:08.391762 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:08.391855 kubelet[2689]: E1124 06:47:08.391814 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:08.391915 kubelet[2689]: E1124 06:47:08.391890 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-55b85594bc-ngknk_calico-system(acd6d1b4-5631-405d-9c2a-2cf6826dc7b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:08.391982 kubelet[2689]: E1124 06:47:08.391932 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55b85594bc-ngknk" 
podUID="acd6d1b4-5631-405d-9c2a-2cf6826dc7b1" Nov 24 06:47:08.460574 systemd-networkd[1449]: cali2ccafe23667: Gained IPv6LL Nov 24 06:47:08.789787 kubelet[2689]: I1124 06:47:08.789736 2689 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e" path="/var/lib/kubelet/pods/181422e2-5cc7-4e92-bf3a-1c5d7cb34c3e/volumes" Nov 24 06:47:08.938993 kubelet[2689]: E1124 06:47:08.938942 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55b85594bc-ngknk" podUID="acd6d1b4-5631-405d-9c2a-2cf6826dc7b1" Nov 24 06:47:09.292584 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL Nov 24 06:47:11.140525 systemd[1]: Started sshd@7-10.0.0.32:22-10.0.0.1:57234.service - OpenSSH per-connection server daemon (10.0.0.1:57234). Nov 24 06:47:11.199564 sshd[4270]: Accepted publickey for core from 10.0.0.1 port 57234 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:11.201174 sshd-session[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:11.205343 systemd-logind[1532]: New session 8 of user core. 
Nov 24 06:47:11.210567 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 24 06:47:11.350353 sshd[4273]: Connection closed by 10.0.0.1 port 57234 Nov 24 06:47:11.350596 sshd-session[4270]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:11.355132 systemd[1]: sshd@7-10.0.0.32:22-10.0.0.1:57234.service: Deactivated successfully. Nov 24 06:47:11.357176 systemd[1]: session-8.scope: Deactivated successfully. Nov 24 06:47:11.358060 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit. Nov 24 06:47:11.359932 systemd-logind[1532]: Removed session 8. Nov 24 06:47:11.792230 containerd[1544]: time="2025-11-24T06:47:11.792165432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7785576499-wrvhp,Uid:86381663-d21f-4c14-bc69-3f80735f20fe,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:47:11.793736 containerd[1544]: time="2025-11-24T06:47:11.793712446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7785576499-vjzwh,Uid:e17ed833-91af-49ab-901e-293fc1161607,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:47:11.797099 containerd[1544]: time="2025-11-24T06:47:11.797080082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-52wxk,Uid:598e36cd-b984-4495-bdd1-88d1ae40f5c0,Namespace:kube-system,Attempt:0,}" Nov 24 06:47:11.799422 containerd[1544]: time="2025-11-24T06:47:11.799387900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z57g,Uid:de4b2b6f-2e35-454b-b826-35c899986b61,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:11.953508 systemd-networkd[1449]: calia0cb5fe0c5f: Link UP Nov 24 06:47:11.953683 systemd-networkd[1449]: calia0cb5fe0c5f: Gained carrier Nov 24 06:47:11.968145 containerd[1544]: 2025-11-24 06:47:11.890 [INFO][4286] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0 
calico-apiserver-7785576499- calico-apiserver e17ed833-91af-49ab-901e-293fc1161607 830 0 2025-11-24 06:46:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7785576499 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7785576499-vjzwh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia0cb5fe0c5f [] [] }} ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-vjzwh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--vjzwh-" Nov 24 06:47:11.968145 containerd[1544]: 2025-11-24 06:47:11.890 [INFO][4286] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-vjzwh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0" Nov 24 06:47:11.968145 containerd[1544]: 2025-11-24 06:47:11.919 [INFO][4344] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" HandleID="k8s-pod-network.ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Workload="localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0" Nov 24 06:47:11.968304 containerd[1544]: 2025-11-24 06:47:11.919 [INFO][4344] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" HandleID="k8s-pod-network.ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Workload="localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4710), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7785576499-vjzwh", "timestamp":"2025-11-24 06:47:11.919059667 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:11.968304 containerd[1544]: 2025-11-24 06:47:11.919 [INFO][4344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:11.968304 containerd[1544]: 2025-11-24 06:47:11.919 [INFO][4344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:47:11.968304 containerd[1544]: 2025-11-24 06:47:11.919 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:11.968304 containerd[1544]: 2025-11-24 06:47:11.925 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" host="localhost" Nov 24 06:47:11.968304 containerd[1544]: 2025-11-24 06:47:11.931 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:11.968304 containerd[1544]: 2025-11-24 06:47:11.934 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:11.968304 containerd[1544]: 2025-11-24 06:47:11.935 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:11.968304 containerd[1544]: 2025-11-24 06:47:11.936 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:11.968304 containerd[1544]: 2025-11-24 06:47:11.936 [INFO][4344] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" host="localhost" Nov 24 
06:47:11.968946 containerd[1544]: 2025-11-24 06:47:11.939 [INFO][4344] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808 Nov 24 06:47:11.968946 containerd[1544]: 2025-11-24 06:47:11.941 [INFO][4344] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" host="localhost" Nov 24 06:47:11.968946 containerd[1544]: 2025-11-24 06:47:11.947 [INFO][4344] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" host="localhost" Nov 24 06:47:11.968946 containerd[1544]: 2025-11-24 06:47:11.947 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" host="localhost" Nov 24 06:47:11.968946 containerd[1544]: 2025-11-24 06:47:11.947 [INFO][4344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:11.968946 containerd[1544]: 2025-11-24 06:47:11.947 [INFO][4344] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" HandleID="k8s-pod-network.ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Workload="localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0" Nov 24 06:47:11.969241 containerd[1544]: 2025-11-24 06:47:11.951 [INFO][4286] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-vjzwh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0", GenerateName:"calico-apiserver-7785576499-", Namespace:"calico-apiserver", SelfLink:"", UID:"e17ed833-91af-49ab-901e-293fc1161607", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7785576499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7785576499-vjzwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0cb5fe0c5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:11.969301 containerd[1544]: 2025-11-24 06:47:11.951 [INFO][4286] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-vjzwh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0" Nov 24 06:47:11.969301 containerd[1544]: 2025-11-24 06:47:11.951 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0cb5fe0c5f ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-vjzwh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0" Nov 24 06:47:11.969301 containerd[1544]: 2025-11-24 06:47:11.953 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-vjzwh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0" Nov 24 06:47:11.969367 containerd[1544]: 2025-11-24 06:47:11.953 [INFO][4286] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-vjzwh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0", 
GenerateName:"calico-apiserver-7785576499-", Namespace:"calico-apiserver", SelfLink:"", UID:"e17ed833-91af-49ab-901e-293fc1161607", ResourceVersion:"830", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7785576499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808", Pod:"calico-apiserver-7785576499-vjzwh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia0cb5fe0c5f", MAC:"da:52:d9:6f:88:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:11.969510 containerd[1544]: 2025-11-24 06:47:11.962 [INFO][4286] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-vjzwh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--vjzwh-eth0" Nov 24 06:47:11.991800 containerd[1544]: time="2025-11-24T06:47:11.991612275Z" level=info msg="connecting to shim ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808" 
address="unix:///run/containerd/s/0f3799320e70be265cfcbd4356f3b995e6d92823af0a20820bf86aa6c6d56f6c" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:12.021590 systemd[1]: Started cri-containerd-ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808.scope - libcontainer container ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808. Nov 24 06:47:12.038577 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:12.056120 systemd-networkd[1449]: cali3f4a6c49b71: Link UP Nov 24 06:47:12.056764 systemd-networkd[1449]: cali3f4a6c49b71: Gained carrier Nov 24 06:47:12.073124 containerd[1544]: 2025-11-24 06:47:11.887 [INFO][4298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8z57g-eth0 csi-node-driver- calico-system de4b2b6f-2e35-454b-b826-35c899986b61 714 0 2025-11-24 06:46:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8z57g eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3f4a6c49b71 [] [] }} ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" Namespace="calico-system" Pod="csi-node-driver-8z57g" WorkloadEndpoint="localhost-k8s-csi--node--driver--8z57g-" Nov 24 06:47:12.073124 containerd[1544]: 2025-11-24 06:47:11.887 [INFO][4298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" Namespace="calico-system" Pod="csi-node-driver-8z57g" WorkloadEndpoint="localhost-k8s-csi--node--driver--8z57g-eth0" Nov 24 06:47:12.073124 containerd[1544]: 2025-11-24 
06:47:11.925 [INFO][4342] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" HandleID="k8s-pod-network.0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" Workload="localhost-k8s-csi--node--driver--8z57g-eth0" Nov 24 06:47:12.073530 containerd[1544]: 2025-11-24 06:47:11.925 [INFO][4342] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" HandleID="k8s-pod-network.0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" Workload="localhost-k8s-csi--node--driver--8z57g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034cff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8z57g", "timestamp":"2025-11-24 06:47:11.925201248 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:12.073530 containerd[1544]: 2025-11-24 06:47:11.925 [INFO][4342] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:12.073530 containerd[1544]: 2025-11-24 06:47:11.947 [INFO][4342] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:47:12.073530 containerd[1544]: 2025-11-24 06:47:11.947 [INFO][4342] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:12.073530 containerd[1544]: 2025-11-24 06:47:12.026 [INFO][4342] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" host="localhost" Nov 24 06:47:12.073530 containerd[1544]: 2025-11-24 06:47:12.032 [INFO][4342] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:12.073530 containerd[1544]: 2025-11-24 06:47:12.035 [INFO][4342] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:12.073530 containerd[1544]: 2025-11-24 06:47:12.037 [INFO][4342] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:12.073530 containerd[1544]: 2025-11-24 06:47:12.038 [INFO][4342] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:12.073530 containerd[1544]: 2025-11-24 06:47:12.038 [INFO][4342] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" host="localhost" Nov 24 06:47:12.073847 containerd[1544]: 2025-11-24 06:47:12.042 [INFO][4342] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4 Nov 24 06:47:12.073847 containerd[1544]: 2025-11-24 06:47:12.046 [INFO][4342] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" host="localhost" Nov 24 06:47:12.073847 containerd[1544]: 2025-11-24 06:47:12.049 [INFO][4342] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" host="localhost" Nov 24 06:47:12.073847 containerd[1544]: 2025-11-24 06:47:12.049 [INFO][4342] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" host="localhost" Nov 24 06:47:12.073847 containerd[1544]: 2025-11-24 06:47:12.050 [INFO][4342] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 06:47:12.073847 containerd[1544]: 2025-11-24 06:47:12.050 [INFO][4342] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" HandleID="k8s-pod-network.0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" Workload="localhost-k8s-csi--node--driver--8z57g-eth0" Nov 24 06:47:12.074010 containerd[1544]: 2025-11-24 06:47:12.053 [INFO][4298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" Namespace="calico-system" Pod="csi-node-driver-8z57g" WorkloadEndpoint="localhost-k8s-csi--node--driver--8z57g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8z57g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de4b2b6f-2e35-454b-b826-35c899986b61", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8z57g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3f4a6c49b71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:12.074086 containerd[1544]: 2025-11-24 06:47:12.053 [INFO][4298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" Namespace="calico-system" Pod="csi-node-driver-8z57g" WorkloadEndpoint="localhost-k8s-csi--node--driver--8z57g-eth0" Nov 24 06:47:12.074086 containerd[1544]: 2025-11-24 06:47:12.053 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f4a6c49b71 ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" Namespace="calico-system" Pod="csi-node-driver-8z57g" WorkloadEndpoint="localhost-k8s-csi--node--driver--8z57g-eth0" Nov 24 06:47:12.074086 containerd[1544]: 2025-11-24 06:47:12.058 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" Namespace="calico-system" Pod="csi-node-driver-8z57g" WorkloadEndpoint="localhost-k8s-csi--node--driver--8z57g-eth0" Nov 24 06:47:12.074177 containerd[1544]: 2025-11-24 06:47:12.058 [INFO][4298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" 
Namespace="calico-system" Pod="csi-node-driver-8z57g" WorkloadEndpoint="localhost-k8s-csi--node--driver--8z57g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8z57g-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"de4b2b6f-2e35-454b-b826-35c899986b61", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4", Pod:"csi-node-driver-8z57g", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3f4a6c49b71", MAC:"42:33:84:8a:22:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:12.074253 containerd[1544]: 2025-11-24 06:47:12.070 [INFO][4298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" Namespace="calico-system" Pod="csi-node-driver-8z57g" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8z57g-eth0" Nov 24 06:47:12.080029 containerd[1544]: time="2025-11-24T06:47:12.079996564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7785576499-vjzwh,Uid:e17ed833-91af-49ab-901e-293fc1161607,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ab2fa89a9769700eedf06b98cc21319c7c864a0b8ff29602edcef3f6962af808\"" Nov 24 06:47:12.081717 containerd[1544]: time="2025-11-24T06:47:12.081690926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:12.117861 containerd[1544]: time="2025-11-24T06:47:12.117815310Z" level=info msg="connecting to shim 0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4" address="unix:///run/containerd/s/c962afe462f0694425aa30213f9697bdde2de21591ef892a556b59974c0aa1b9" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:12.142576 systemd[1]: Started cri-containerd-0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4.scope - libcontainer container 0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4. 
Nov 24 06:47:12.161493 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:12.164628 systemd-networkd[1449]: calie2c7ae52943: Link UP Nov 24 06:47:12.166592 systemd-networkd[1449]: calie2c7ae52943: Gained carrier Nov 24 06:47:12.180745 containerd[1544]: time="2025-11-24T06:47:12.180661495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8z57g,Uid:de4b2b6f-2e35-454b-b826-35c899986b61,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b816acbc214eaed1b63eab2870ca0162ac934a2d15d85320e81698088d9f0b4\"" Nov 24 06:47:12.185318 containerd[1544]: 2025-11-24 06:47:11.890 [INFO][4291] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--52wxk-eth0 coredns-66bc5c9577- kube-system 598e36cd-b984-4495-bdd1-88d1ae40f5c0 827 0 2025-11-24 06:46:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-52wxk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie2c7ae52943 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Namespace="kube-system" Pod="coredns-66bc5c9577-52wxk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--52wxk-" Nov 24 06:47:12.185318 containerd[1544]: 2025-11-24 06:47:11.890 [INFO][4291] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Namespace="kube-system" Pod="coredns-66bc5c9577-52wxk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--52wxk-eth0" Nov 24 06:47:12.185318 containerd[1544]: 2025-11-24 06:47:11.929 [INFO][4356] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" HandleID="k8s-pod-network.86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Workload="localhost-k8s-coredns--66bc5c9577--52wxk-eth0" Nov 24 06:47:12.185563 containerd[1544]: 2025-11-24 06:47:11.929 [INFO][4356] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" HandleID="k8s-pod-network.86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Workload="localhost-k8s-coredns--66bc5c9577--52wxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138870), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-52wxk", "timestamp":"2025-11-24 06:47:11.929345567 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:12.185563 containerd[1544]: 2025-11-24 06:47:11.929 [INFO][4356] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:12.185563 containerd[1544]: 2025-11-24 06:47:12.049 [INFO][4356] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:47:12.185563 containerd[1544]: 2025-11-24 06:47:12.049 [INFO][4356] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:12.185563 containerd[1544]: 2025-11-24 06:47:12.126 [INFO][4356] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" host="localhost" Nov 24 06:47:12.185563 containerd[1544]: 2025-11-24 06:47:12.132 [INFO][4356] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:12.185563 containerd[1544]: 2025-11-24 06:47:12.136 [INFO][4356] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:12.185563 containerd[1544]: 2025-11-24 06:47:12.138 [INFO][4356] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:12.185563 containerd[1544]: 2025-11-24 06:47:12.140 [INFO][4356] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:12.185563 containerd[1544]: 2025-11-24 06:47:12.140 [INFO][4356] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" host="localhost" Nov 24 06:47:12.185792 containerd[1544]: 2025-11-24 06:47:12.143 [INFO][4356] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a Nov 24 06:47:12.185792 containerd[1544]: 2025-11-24 06:47:12.148 [INFO][4356] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" host="localhost" Nov 24 06:47:12.185792 containerd[1544]: 2025-11-24 06:47:12.154 [INFO][4356] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" host="localhost" Nov 24 06:47:12.185792 containerd[1544]: 2025-11-24 06:47:12.154 [INFO][4356] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" host="localhost" Nov 24 06:47:12.185792 containerd[1544]: 2025-11-24 06:47:12.154 [INFO][4356] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 06:47:12.185792 containerd[1544]: 2025-11-24 06:47:12.154 [INFO][4356] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" HandleID="k8s-pod-network.86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Workload="localhost-k8s-coredns--66bc5c9577--52wxk-eth0" Nov 24 06:47:12.185914 containerd[1544]: 2025-11-24 06:47:12.159 [INFO][4291] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Namespace="kube-system" Pod="coredns-66bc5c9577-52wxk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--52wxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--52wxk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"598e36cd-b984-4495-bdd1-88d1ae40f5c0", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-52wxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2c7ae52943", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:12.185914 containerd[1544]: 2025-11-24 06:47:12.160 [INFO][4291] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Namespace="kube-system" Pod="coredns-66bc5c9577-52wxk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--52wxk-eth0" Nov 24 06:47:12.185914 containerd[1544]: 2025-11-24 06:47:12.160 [INFO][4291] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2c7ae52943 ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Namespace="kube-system" Pod="coredns-66bc5c9577-52wxk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--52wxk-eth0" Nov 24 
06:47:12.185914 containerd[1544]: 2025-11-24 06:47:12.168 [INFO][4291] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Namespace="kube-system" Pod="coredns-66bc5c9577-52wxk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--52wxk-eth0" Nov 24 06:47:12.185914 containerd[1544]: 2025-11-24 06:47:12.170 [INFO][4291] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Namespace="kube-system" Pod="coredns-66bc5c9577-52wxk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--52wxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--52wxk-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"598e36cd-b984-4495-bdd1-88d1ae40f5c0", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a", Pod:"coredns-66bc5c9577-52wxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie2c7ae52943", 
MAC:"4a:4c:d2:87:71:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:12.185914 containerd[1544]: 2025-11-24 06:47:12.180 [INFO][4291] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" Namespace="kube-system" Pod="coredns-66bc5c9577-52wxk" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--52wxk-eth0" Nov 24 06:47:12.220039 containerd[1544]: time="2025-11-24T06:47:12.219991088Z" level=info msg="connecting to shim 86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a" address="unix:///run/containerd/s/9a3a1fca6011c52a30a269a76f15f6134c80061870ada6b5031057462e9fb7d4" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:12.242583 systemd[1]: Started cri-containerd-86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a.scope - libcontainer container 86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a. 
Nov 24 06:47:12.257262 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:12.263430 systemd-networkd[1449]: calie52e703de1f: Link UP Nov 24 06:47:12.263682 systemd-networkd[1449]: calie52e703de1f: Gained carrier Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:11.887 [INFO][4317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0 calico-apiserver-7785576499- calico-apiserver 86381663-d21f-4c14-bc69-3f80735f20fe 828 0 2025-11-24 06:46:48 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7785576499 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7785576499-wrvhp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie52e703de1f [] [] }} ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-wrvhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--wrvhp-" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:11.887 [INFO][4317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-wrvhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:11.930 [INFO][4354] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" HandleID="k8s-pod-network.de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" 
Workload="localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:11.930 [INFO][4354] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" HandleID="k8s-pod-network.de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" Workload="localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aa9b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7785576499-wrvhp", "timestamp":"2025-11-24 06:47:11.930338868 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:11.930 [INFO][4354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.154 [INFO][4354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.154 [INFO][4354] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.227 [INFO][4354] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" host="localhost" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.232 [INFO][4354] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.237 [INFO][4354] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.240 [INFO][4354] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.243 [INFO][4354] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.243 [INFO][4354] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" host="localhost" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.246 [INFO][4354] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.250 [INFO][4354] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" host="localhost" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.256 [INFO][4354] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" host="localhost" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.256 [INFO][4354] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" host="localhost" Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.256 [INFO][4354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 06:47:12.278874 containerd[1544]: 2025-11-24 06:47:12.256 [INFO][4354] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" HandleID="k8s-pod-network.de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" Workload="localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0" Nov 24 06:47:12.279562 containerd[1544]: 2025-11-24 06:47:12.261 [INFO][4317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-wrvhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0", GenerateName:"calico-apiserver-7785576499-", Namespace:"calico-apiserver", SelfLink:"", UID:"86381663-d21f-4c14-bc69-3f80735f20fe", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7785576499", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7785576499-wrvhp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie52e703de1f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:12.279562 containerd[1544]: 2025-11-24 06:47:12.261 [INFO][4317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-wrvhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0" Nov 24 06:47:12.279562 containerd[1544]: 2025-11-24 06:47:12.261 [INFO][4317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie52e703de1f ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-wrvhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0" Nov 24 06:47:12.279562 containerd[1544]: 2025-11-24 06:47:12.263 [INFO][4317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-wrvhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0" Nov 24 06:47:12.279562 containerd[1544]: 2025-11-24 06:47:12.263 [INFO][4317] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-wrvhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0", GenerateName:"calico-apiserver-7785576499-", Namespace:"calico-apiserver", SelfLink:"", UID:"86381663-d21f-4c14-bc69-3f80735f20fe", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7785576499", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec", Pod:"calico-apiserver-7785576499-wrvhp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie52e703de1f", MAC:"42:3f:13:27:fd:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:12.279562 containerd[1544]: 2025-11-24 06:47:12.275 [INFO][4317] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" Namespace="calico-apiserver" Pod="calico-apiserver-7785576499-wrvhp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7785576499--wrvhp-eth0" Nov 24 06:47:12.301154 containerd[1544]: time="2025-11-24T06:47:12.301090601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-52wxk,Uid:598e36cd-b984-4495-bdd1-88d1ae40f5c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a\"" Nov 24 06:47:12.304284 containerd[1544]: time="2025-11-24T06:47:12.304240775Z" level=info msg="connecting to shim de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec" address="unix:///run/containerd/s/b40ea588af6106872a2338347c80ead284691ebdb8b7638a7208f129ba1f9bb1" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:12.309237 containerd[1544]: time="2025-11-24T06:47:12.308556425Z" level=info msg="CreateContainer within sandbox \"86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 06:47:12.320368 containerd[1544]: time="2025-11-24T06:47:12.320332811Z" level=info msg="Container c615360688fd071de6a79f65c12211de5ac025f4665de1b9c0973ffb66d71f33: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:47:12.327669 containerd[1544]: time="2025-11-24T06:47:12.327634336Z" level=info msg="CreateContainer within sandbox \"86d7bd1773a06b5a287378572cbebbcc69647ec3e0473dcfc638bd0ab40c7c9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c615360688fd071de6a79f65c12211de5ac025f4665de1b9c0973ffb66d71f33\"" Nov 24 06:47:12.328268 containerd[1544]: time="2025-11-24T06:47:12.328138356Z" level=info msg="StartContainer for \"c615360688fd071de6a79f65c12211de5ac025f4665de1b9c0973ffb66d71f33\"" Nov 24 06:47:12.329100 containerd[1544]: time="2025-11-24T06:47:12.329054011Z" level=info msg="connecting to shim 
c615360688fd071de6a79f65c12211de5ac025f4665de1b9c0973ffb66d71f33" address="unix:///run/containerd/s/9a3a1fca6011c52a30a269a76f15f6134c80061870ada6b5031057462e9fb7d4" protocol=ttrpc version=3 Nov 24 06:47:12.329597 systemd[1]: Started cri-containerd-de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec.scope - libcontainer container de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec. Nov 24 06:47:12.350581 systemd[1]: Started cri-containerd-c615360688fd071de6a79f65c12211de5ac025f4665de1b9c0973ffb66d71f33.scope - libcontainer container c615360688fd071de6a79f65c12211de5ac025f4665de1b9c0973ffb66d71f33. Nov 24 06:47:12.354481 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:12.382842 containerd[1544]: time="2025-11-24T06:47:12.382801345Z" level=info msg="StartContainer for \"c615360688fd071de6a79f65c12211de5ac025f4665de1b9c0973ffb66d71f33\" returns successfully" Nov 24 06:47:12.392710 containerd[1544]: time="2025-11-24T06:47:12.392666630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7785576499-wrvhp,Uid:86381663-d21f-4c14-bc69-3f80735f20fe,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"de2cb59581ab9062c5bb131524c2279eaadedbcd16f0e972cfd4d376d763d2ec\"" Nov 24 06:47:12.459886 containerd[1544]: time="2025-11-24T06:47:12.459785545Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:12.461181 containerd[1544]: time="2025-11-24T06:47:12.461107815Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:12.461318 containerd[1544]: time="2025-11-24T06:47:12.461145737Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:12.461476 kubelet[2689]: E1124 06:47:12.461424 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:12.461789 kubelet[2689]: E1124 06:47:12.461485 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:12.461789 kubelet[2689]: E1124 06:47:12.461641 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7785576499-vjzwh_calico-apiserver(e17ed833-91af-49ab-901e-293fc1161607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:12.461789 kubelet[2689]: E1124 06:47:12.461674 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" podUID="e17ed833-91af-49ab-901e-293fc1161607" Nov 24 06:47:12.462111 
containerd[1544]: time="2025-11-24T06:47:12.462031105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 06:47:12.757285 containerd[1544]: time="2025-11-24T06:47:12.757241299Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:12.758460 containerd[1544]: time="2025-11-24T06:47:12.758412004Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 06:47:12.758543 containerd[1544]: time="2025-11-24T06:47:12.758476806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 06:47:12.758727 kubelet[2689]: E1124 06:47:12.758642 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:12.758727 kubelet[2689]: E1124 06:47:12.758685 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:12.758889 kubelet[2689]: E1124 06:47:12.758862 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8z57g_calico-system(de4b2b6f-2e35-454b-b826-35c899986b61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:12.759365 containerd[1544]: time="2025-11-24T06:47:12.758991847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:12.792267 containerd[1544]: time="2025-11-24T06:47:12.792204005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768c95b4f7-bql9j,Uid:cff648c4-cfae-4330-83e3-56fc17913402,Namespace:calico-apiserver,Attempt:0,}" Nov 24 06:47:12.911247 systemd-networkd[1449]: cali998ad49916b: Link UP Nov 24 06:47:12.912364 systemd-networkd[1449]: cali998ad49916b: Gained carrier Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.844 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0 calico-apiserver-768c95b4f7- calico-apiserver cff648c4-cfae-4330-83e3-56fc17913402 831 0 2025-11-24 06:46:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:768c95b4f7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-768c95b4f7-bql9j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali998ad49916b [] [] }} ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Namespace="calico-apiserver" Pod="calico-apiserver-768c95b4f7-bql9j" WorkloadEndpoint="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.846 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Namespace="calico-apiserver" Pod="calico-apiserver-768c95b4f7-bql9j" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.878 [INFO][4647] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" HandleID="k8s-pod-network.062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Workload="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.879 [INFO][4647] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" HandleID="k8s-pod-network.062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Workload="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139640), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-768c95b4f7-bql9j", "timestamp":"2025-11-24 06:47:12.878866878 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.879 [INFO][4647] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.879 [INFO][4647] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.879 [INFO][4647] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.885 [INFO][4647] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" host="localhost" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.889 [INFO][4647] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.892 [INFO][4647] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.894 [INFO][4647] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.896 [INFO][4647] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.896 [INFO][4647] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" host="localhost" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.897 [INFO][4647] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.900 [INFO][4647] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" host="localhost" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.905 [INFO][4647] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" host="localhost" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.905 [INFO][4647] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" host="localhost" Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.905 [INFO][4647] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 06:47:12.924714 containerd[1544]: 2025-11-24 06:47:12.905 [INFO][4647] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" HandleID="k8s-pod-network.062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Workload="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0" Nov 24 06:47:12.925538 containerd[1544]: 2025-11-24 06:47:12.908 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Namespace="calico-apiserver" Pod="calico-apiserver-768c95b4f7-bql9j" WorkloadEndpoint="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0", GenerateName:"calico-apiserver-768c95b4f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"cff648c4-cfae-4330-83e3-56fc17913402", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768c95b4f7", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-768c95b4f7-bql9j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali998ad49916b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:12.925538 containerd[1544]: 2025-11-24 06:47:12.909 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Namespace="calico-apiserver" Pod="calico-apiserver-768c95b4f7-bql9j" WorkloadEndpoint="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0" Nov 24 06:47:12.925538 containerd[1544]: 2025-11-24 06:47:12.909 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali998ad49916b ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Namespace="calico-apiserver" Pod="calico-apiserver-768c95b4f7-bql9j" WorkloadEndpoint="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0" Nov 24 06:47:12.925538 containerd[1544]: 2025-11-24 06:47:12.911 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Namespace="calico-apiserver" Pod="calico-apiserver-768c95b4f7-bql9j" WorkloadEndpoint="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0" Nov 24 06:47:12.925538 containerd[1544]: 2025-11-24 06:47:12.911 [INFO][4633] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Namespace="calico-apiserver" Pod="calico-apiserver-768c95b4f7-bql9j" WorkloadEndpoint="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0", GenerateName:"calico-apiserver-768c95b4f7-", Namespace:"calico-apiserver", SelfLink:"", UID:"cff648c4-cfae-4330-83e3-56fc17913402", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"768c95b4f7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb", Pod:"calico-apiserver-768c95b4f7-bql9j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali998ad49916b", MAC:"92:d8:5d:0f:bc:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:12.925538 containerd[1544]: 2025-11-24 06:47:12.921 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" Namespace="calico-apiserver" Pod="calico-apiserver-768c95b4f7-bql9j" WorkloadEndpoint="localhost-k8s-calico--apiserver--768c95b4f7--bql9j-eth0" Nov 24 06:47:13.002225 kubelet[2689]: E1124 06:47:13.002165 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" podUID="e17ed833-91af-49ab-901e-293fc1161607" Nov 24 06:47:13.004806 kubelet[2689]: I1124 06:47:13.004725 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-52wxk" podStartSLOduration=34.00469291 podStartE2EDuration="34.00469291s" podCreationTimestamp="2025-11-24 06:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:47:12.992093414 +0000 UTC m=+40.286745112" watchObservedRunningTime="2025-11-24 06:47:13.00469291 +0000 UTC m=+40.299344608" Nov 24 06:47:13.033137 containerd[1544]: time="2025-11-24T06:47:13.032990373Z" level=info msg="connecting to shim 062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb" address="unix:///run/containerd/s/2fb94b18006df0f455ed7c5c2a0e340747bd8a211b0e7223791124741e069339" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:13.067563 systemd[1]: Started cri-containerd-062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb.scope - libcontainer container 062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb. 
Nov 24 06:47:13.079703 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:13.091328 containerd[1544]: time="2025-11-24T06:47:13.091288368Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:13.092534 containerd[1544]: time="2025-11-24T06:47:13.092509818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:13.092685 containerd[1544]: time="2025-11-24T06:47:13.092607652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:13.092871 kubelet[2689]: E1124 06:47:13.092832 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:13.092930 kubelet[2689]: E1124 06:47:13.092876 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:13.093058 kubelet[2689]: E1124 06:47:13.093023 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7785576499-wrvhp_calico-apiserver(86381663-d21f-4c14-bc69-3f80735f20fe): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:13.093090 kubelet[2689]: E1124 06:47:13.093061 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-wrvhp" podUID="86381663-d21f-4c14-bc69-3f80735f20fe" Nov 24 06:47:13.094769 containerd[1544]: time="2025-11-24T06:47:13.094731172Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 06:47:13.115790 containerd[1544]: time="2025-11-24T06:47:13.115740106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-768c95b4f7-bql9j,Uid:cff648c4-cfae-4330-83e3-56fc17913402,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"062c98872469797cda8f1ba1b77f85358444ac944d0baa9557ef9ccbf73290eb\"" Nov 24 06:47:13.423053 containerd[1544]: time="2025-11-24T06:47:13.422927766Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:13.424144 containerd[1544]: time="2025-11-24T06:47:13.424115553Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 06:47:13.424201 containerd[1544]: time="2025-11-24T06:47:13.424188661Z" level=info msg="stop pulling 
image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 06:47:13.424402 kubelet[2689]: E1124 06:47:13.424354 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:13.424468 kubelet[2689]: E1124 06:47:13.424407 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:13.424593 kubelet[2689]: E1124 06:47:13.424559 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8z57g_calico-system(de4b2b6f-2e35-454b-b826-35c899986b61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:13.424678 kubelet[2689]: E1124 06:47:13.424616 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8z57g" podUID="de4b2b6f-2e35-454b-b826-35c899986b61" Nov 24 06:47:13.424722 containerd[1544]: time="2025-11-24T06:47:13.424688863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:13.453686 systemd-networkd[1449]: cali3f4a6c49b71: Gained IPv6LL Nov 24 06:47:13.516602 systemd-networkd[1449]: calia0cb5fe0c5f: Gained IPv6LL Nov 24 06:47:13.708603 systemd-networkd[1449]: calie52e703de1f: Gained IPv6LL Nov 24 06:47:13.758409 containerd[1544]: time="2025-11-24T06:47:13.758336296Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:13.759496 containerd[1544]: time="2025-11-24T06:47:13.759430216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:13.759567 containerd[1544]: time="2025-11-24T06:47:13.759533871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:13.759770 kubelet[2689]: E1124 06:47:13.759718 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 
06:47:13.759770 kubelet[2689]: E1124 06:47:13.759768 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:13.760119 kubelet[2689]: E1124 06:47:13.759853 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-768c95b4f7-bql9j_calico-apiserver(cff648c4-cfae-4330-83e3-56fc17913402): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:13.760119 kubelet[2689]: E1124 06:47:13.759886 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" podUID="cff648c4-cfae-4330-83e3-56fc17913402" Nov 24 06:47:13.789664 containerd[1544]: time="2025-11-24T06:47:13.789612229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dw5sv,Uid:74685ae7-a0a3-452c-92e5-934da9ec5504,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:13.792521 containerd[1544]: time="2025-11-24T06:47:13.792473008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2s5k6,Uid:7c1ea070-2521-4273-b2b3-9736eaffd427,Namespace:kube-system,Attempt:0,}" Nov 24 06:47:13.886892 
systemd-networkd[1449]: cali59af914b1c7: Link UP Nov 24 06:47:13.888011 systemd-networkd[1449]: cali59af914b1c7: Gained carrier Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.829 [INFO][4729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--2s5k6-eth0 coredns-66bc5c9577- kube-system 7c1ea070-2521-4273-b2b3-9736eaffd427 829 0 2025-11-24 06:46:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-2s5k6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali59af914b1c7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Namespace="kube-system" Pod="coredns-66bc5c9577-2s5k6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2s5k6-" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.829 [INFO][4729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Namespace="kube-system" Pod="coredns-66bc5c9577-2s5k6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2s5k6-eth0" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.852 [INFO][4753] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" HandleID="k8s-pod-network.9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Workload="localhost-k8s-coredns--66bc5c9577--2s5k6-eth0" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.852 [INFO][4753] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" 
HandleID="k8s-pod-network.9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Workload="localhost-k8s-coredns--66bc5c9577--2s5k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e950), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-2s5k6", "timestamp":"2025-11-24 06:47:13.852149338 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.852 [INFO][4753] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.852 [INFO][4753] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.852 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.858 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" host="localhost" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.862 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.865 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.866 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.867 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.868 
[INFO][4753] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" host="localhost" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.869 [INFO][4753] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019 Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.873 [INFO][4753] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" host="localhost" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.880 [INFO][4753] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" host="localhost" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.880 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" host="localhost" Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.880 [INFO][4753] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 24 06:47:13.900092 containerd[1544]: 2025-11-24 06:47:13.880 [INFO][4753] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" HandleID="k8s-pod-network.9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Workload="localhost-k8s-coredns--66bc5c9577--2s5k6-eth0" Nov 24 06:47:13.900794 containerd[1544]: 2025-11-24 06:47:13.883 [INFO][4729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Namespace="kube-system" Pod="coredns-66bc5c9577-2s5k6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2s5k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2s5k6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c1ea070-2521-4273-b2b3-9736eaffd427", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-2s5k6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59af914b1c7", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:13.900794 containerd[1544]: 2025-11-24 06:47:13.883 [INFO][4729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Namespace="kube-system" Pod="coredns-66bc5c9577-2s5k6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2s5k6-eth0" Nov 24 06:47:13.900794 containerd[1544]: 2025-11-24 06:47:13.883 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59af914b1c7 ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Namespace="kube-system" Pod="coredns-66bc5c9577-2s5k6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2s5k6-eth0" Nov 24 06:47:13.900794 containerd[1544]: 2025-11-24 06:47:13.888 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Namespace="kube-system" Pod="coredns-66bc5c9577-2s5k6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2s5k6-eth0" Nov 24 06:47:13.900794 containerd[1544]: 2025-11-24 06:47:13.888 [INFO][4729] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Namespace="kube-system" Pod="coredns-66bc5c9577-2s5k6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2s5k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--2s5k6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7c1ea070-2521-4273-b2b3-9736eaffd427", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019", Pod:"coredns-66bc5c9577-2s5k6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59af914b1c7", MAC:"7a:a3:16:63:89:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:13.900794 containerd[1544]: 2025-11-24 06:47:13.897 [INFO][4729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" Namespace="kube-system" Pod="coredns-66bc5c9577-2s5k6" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--2s5k6-eth0" Nov 24 06:47:13.930205 containerd[1544]: time="2025-11-24T06:47:13.930156768Z" level=info msg="connecting to shim 9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019" address="unix:///run/containerd/s/96f2f8d5418c81c4b1b88e9fc978401c2c2a12a33678e08ebec136772a4eaf11" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:13.958633 systemd[1]: Started cri-containerd-9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019.scope - libcontainer container 9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019. 
Nov 24 06:47:13.972922 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:13.991993 systemd-networkd[1449]: calia8ad5314b56: Link UP Nov 24 06:47:13.992865 systemd-networkd[1449]: calia8ad5314b56: Gained carrier Nov 24 06:47:14.011254 kubelet[2689]: E1124 06:47:14.011222 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-wrvhp" podUID="86381663-d21f-4c14-bc69-3f80735f20fe" Nov 24 06:47:14.011859 kubelet[2689]: E1124 06:47:14.011448 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" podUID="e17ed833-91af-49ab-901e-293fc1161607" Nov 24 06:47:14.011859 kubelet[2689]: E1124 06:47:14.011499 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8z57g" podUID="de4b2b6f-2e35-454b-b826-35c899986b61" Nov 24 06:47:14.011984 kubelet[2689]: E1124 06:47:14.011617 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" podUID="cff648c4-cfae-4330-83e3-56fc17913402" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.825 [INFO][4718] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--dw5sv-eth0 goldmane-7c778bb748- calico-system 74685ae7-a0a3-452c-92e5-934da9ec5504 832 0 2025-11-24 06:46:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-dw5sv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia8ad5314b56 [] [] }} 
ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Namespace="calico-system" Pod="goldmane-7c778bb748-dw5sv" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dw5sv-" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.826 [INFO][4718] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Namespace="calico-system" Pod="goldmane-7c778bb748-dw5sv" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dw5sv-eth0" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.856 [INFO][4747] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" HandleID="k8s-pod-network.f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Workload="localhost-k8s-goldmane--7c778bb748--dw5sv-eth0" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.857 [INFO][4747] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" HandleID="k8s-pod-network.f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Workload="localhost-k8s-goldmane--7c778bb748--dw5sv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-dw5sv", "timestamp":"2025-11-24 06:47:13.856979627 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.857 [INFO][4747] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.880 [INFO][4747] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.880 [INFO][4747] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.960 [INFO][4747] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" host="localhost" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.965 [INFO][4747] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.969 [INFO][4747] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.970 [INFO][4747] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.972 [INFO][4747] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.972 [INFO][4747] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" host="localhost" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.973 [INFO][4747] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.977 [INFO][4747] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" host="localhost" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.982 [INFO][4747] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" host="localhost" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.982 [INFO][4747] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" host="localhost" Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.982 [INFO][4747] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 06:47:14.012590 containerd[1544]: 2025-11-24 06:47:13.983 [INFO][4747] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" HandleID="k8s-pod-network.f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Workload="localhost-k8s-goldmane--7c778bb748--dw5sv-eth0" Nov 24 06:47:14.013070 containerd[1544]: 2025-11-24 06:47:13.988 [INFO][4718] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Namespace="calico-system" Pod="goldmane-7c778bb748-dw5sv" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dw5sv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--dw5sv-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"74685ae7-a0a3-452c-92e5-934da9ec5504", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-dw5sv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8ad5314b56", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:14.013070 containerd[1544]: 2025-11-24 06:47:13.988 [INFO][4718] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Namespace="calico-system" Pod="goldmane-7c778bb748-dw5sv" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dw5sv-eth0" Nov 24 06:47:14.013070 containerd[1544]: 2025-11-24 06:47:13.988 [INFO][4718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8ad5314b56 ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Namespace="calico-system" Pod="goldmane-7c778bb748-dw5sv" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dw5sv-eth0" Nov 24 06:47:14.013070 containerd[1544]: 2025-11-24 06:47:13.993 [INFO][4718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Namespace="calico-system" Pod="goldmane-7c778bb748-dw5sv" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dw5sv-eth0" Nov 24 06:47:14.013070 containerd[1544]: 2025-11-24 06:47:13.993 [INFO][4718] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Namespace="calico-system" Pod="goldmane-7c778bb748-dw5sv" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dw5sv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--dw5sv-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"74685ae7-a0a3-452c-92e5-934da9ec5504", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd", Pod:"goldmane-7c778bb748-dw5sv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia8ad5314b56", MAC:"f6:6c:42:69:6e:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:14.013070 containerd[1544]: 2025-11-24 06:47:14.003 [INFO][4718] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" Namespace="calico-system" Pod="goldmane-7c778bb748-dw5sv" 
WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--dw5sv-eth0" Nov 24 06:47:14.028816 systemd-networkd[1449]: calie2c7ae52943: Gained IPv6LL Nov 24 06:47:14.033990 containerd[1544]: time="2025-11-24T06:47:14.033957278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2s5k6,Uid:7c1ea070-2521-4273-b2b3-9736eaffd427,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019\"" Nov 24 06:47:14.048190 containerd[1544]: time="2025-11-24T06:47:14.048139549Z" level=info msg="CreateContainer within sandbox \"9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 24 06:47:14.050798 containerd[1544]: time="2025-11-24T06:47:14.050770414Z" level=info msg="connecting to shim f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd" address="unix:///run/containerd/s/240b68c8f4bdea2c0f4805e426a9a4b69361ca677b8b46c1a4ae36563de7597c" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:14.072022 containerd[1544]: time="2025-11-24T06:47:14.071987706Z" level=info msg="Container f9b343dfd2e095ab19c6dfdef5cb7b24481830ef9b4c66231fe85509c1795ef2: CDI devices from CRI Config.CDIDevices: []" Nov 24 06:47:14.080614 containerd[1544]: time="2025-11-24T06:47:14.080560503Z" level=info msg="CreateContainer within sandbox \"9bdb2a64810199d975710e52ae5f4c7c3c4b448b94da9a0aae638820310b7019\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f9b343dfd2e095ab19c6dfdef5cb7b24481830ef9b4c66231fe85509c1795ef2\"" Nov 24 06:47:14.081446 containerd[1544]: time="2025-11-24T06:47:14.081416706Z" level=info msg="StartContainer for \"f9b343dfd2e095ab19c6dfdef5cb7b24481830ef9b4c66231fe85509c1795ef2\"" Nov 24 06:47:14.082338 containerd[1544]: time="2025-11-24T06:47:14.082291523Z" level=info msg="connecting to shim f9b343dfd2e095ab19c6dfdef5cb7b24481830ef9b4c66231fe85509c1795ef2" 
address="unix:///run/containerd/s/96f2f8d5418c81c4b1b88e9fc978401c2c2a12a33678e08ebec136772a4eaf11" protocol=ttrpc version=3 Nov 24 06:47:14.094567 systemd[1]: Started cri-containerd-f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd.scope - libcontainer container f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd. Nov 24 06:47:14.106608 systemd[1]: Started cri-containerd-f9b343dfd2e095ab19c6dfdef5cb7b24481830ef9b4c66231fe85509c1795ef2.scope - libcontainer container f9b343dfd2e095ab19c6dfdef5cb7b24481830ef9b4c66231fe85509c1795ef2. Nov 24 06:47:14.122024 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:14.142629 containerd[1544]: time="2025-11-24T06:47:14.142544277Z" level=info msg="StartContainer for \"f9b343dfd2e095ab19c6dfdef5cb7b24481830ef9b4c66231fe85509c1795ef2\" returns successfully" Nov 24 06:47:14.168045 containerd[1544]: time="2025-11-24T06:47:14.168004100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-dw5sv,Uid:74685ae7-a0a3-452c-92e5-934da9ec5504,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7ae55f48834c56fcc2256551c95bde73745600ab36c19407b98a122c04a90bd\"" Nov 24 06:47:14.170854 containerd[1544]: time="2025-11-24T06:47:14.170837146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 06:47:14.348625 systemd-networkd[1449]: cali998ad49916b: Gained IPv6LL Nov 24 06:47:14.495052 containerd[1544]: time="2025-11-24T06:47:14.495010174Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:14.531815 containerd[1544]: time="2025-11-24T06:47:14.531763257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 06:47:14.531877 containerd[1544]: time="2025-11-24T06:47:14.531840593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:14.531972 kubelet[2689]: E1124 06:47:14.531932 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:14.532015 kubelet[2689]: E1124 06:47:14.531971 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:14.532062 kubelet[2689]: E1124 06:47:14.532045 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dw5sv_calico-system(74685ae7-a0a3-452c-92e5-934da9ec5504): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:14.532091 kubelet[2689]: E1124 06:47:14.532073 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-dw5sv" podUID="74685ae7-a0a3-452c-92e5-934da9ec5504" Nov 24 06:47:14.927603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4108703986.mount: Deactivated successfully. Nov 24 06:47:15.010912 kubelet[2689]: E1124 06:47:15.010860 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dw5sv" podUID="74685ae7-a0a3-452c-92e5-934da9ec5504" Nov 24 06:47:15.012424 kubelet[2689]: E1124 06:47:15.012378 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" podUID="cff648c4-cfae-4330-83e3-56fc17913402" Nov 24 06:47:15.045033 kubelet[2689]: I1124 06:47:15.044977 2689 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2s5k6" podStartSLOduration=36.044958613 podStartE2EDuration="36.044958613s" podCreationTimestamp="2025-11-24 06:46:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 06:47:15.032713612 +0000 UTC m=+42.327365310" watchObservedRunningTime="2025-11-24 
06:47:15.044958613 +0000 UTC m=+42.339610311" Nov 24 06:47:15.372587 systemd-networkd[1449]: cali59af914b1c7: Gained IPv6LL Nov 24 06:47:15.789361 containerd[1544]: time="2025-11-24T06:47:15.789303805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596cc5c64b-j6f7z,Uid:d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f,Namespace:calico-system,Attempt:0,}" Nov 24 06:47:15.870909 systemd-networkd[1449]: cali96e1a1f2d3f: Link UP Nov 24 06:47:15.871581 systemd-networkd[1449]: cali96e1a1f2d3f: Gained carrier Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.819 [INFO][4921] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0 calico-kube-controllers-596cc5c64b- calico-system d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f 826 0 2025-11-24 06:46:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:596cc5c64b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-596cc5c64b-j6f7z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali96e1a1f2d3f [] [] }} ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Namespace="calico-system" Pod="calico-kube-controllers-596cc5c64b-j6f7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.819 [INFO][4921] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Namespace="calico-system" Pod="calico-kube-controllers-596cc5c64b-j6f7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 
06:47:15.842 [INFO][4937] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" HandleID="k8s-pod-network.e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Workload="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.842 [INFO][4937] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" HandleID="k8s-pod-network.e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Workload="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-596cc5c64b-j6f7z", "timestamp":"2025-11-24 06:47:15.842494166 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.842 [INFO][4937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.842 [INFO][4937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.842 [INFO][4937] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.848 [INFO][4937] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" host="localhost" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.851 [INFO][4937] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.854 [INFO][4937] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.855 [INFO][4937] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.857 [INFO][4937] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.857 [INFO][4937] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" host="localhost" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.858 [INFO][4937] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.861 [INFO][4937] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" host="localhost" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.865 [INFO][4937] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" host="localhost" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.865 [INFO][4937] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" host="localhost" Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.865 [INFO][4937] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 24 06:47:15.884775 containerd[1544]: 2025-11-24 06:47:15.865 [INFO][4937] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" HandleID="k8s-pod-network.e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Workload="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0" Nov 24 06:47:15.885320 containerd[1544]: 2025-11-24 06:47:15.868 [INFO][4921] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Namespace="calico-system" Pod="calico-kube-controllers-596cc5c64b-j6f7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0", GenerateName:"calico-kube-controllers-596cc5c64b-", Namespace:"calico-system", SelfLink:"", UID:"d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596cc5c64b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-596cc5c64b-j6f7z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96e1a1f2d3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:15.885320 containerd[1544]: 2025-11-24 06:47:15.869 [INFO][4921] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Namespace="calico-system" Pod="calico-kube-controllers-596cc5c64b-j6f7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0" Nov 24 06:47:15.885320 containerd[1544]: 2025-11-24 06:47:15.869 [INFO][4921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96e1a1f2d3f ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Namespace="calico-system" Pod="calico-kube-controllers-596cc5c64b-j6f7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0" Nov 24 06:47:15.885320 containerd[1544]: 2025-11-24 06:47:15.871 [INFO][4921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Namespace="calico-system" Pod="calico-kube-controllers-596cc5c64b-j6f7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0" Nov 24 06:47:15.885320 containerd[1544]: 
2025-11-24 06:47:15.872 [INFO][4921] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Namespace="calico-system" Pod="calico-kube-controllers-596cc5c64b-j6f7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0", GenerateName:"calico-kube-controllers-596cc5c64b-", Namespace:"calico-system", SelfLink:"", UID:"d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.November, 24, 6, 46, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596cc5c64b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c", Pod:"calico-kube-controllers-596cc5c64b-j6f7z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96e1a1f2d3f", MAC:"92:35:7c:32:db:5b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 24 06:47:15.885320 containerd[1544]: 
2025-11-24 06:47:15.881 [INFO][4921] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" Namespace="calico-system" Pod="calico-kube-controllers-596cc5c64b-j6f7z" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--596cc5c64b--j6f7z-eth0" Nov 24 06:47:15.909087 containerd[1544]: time="2025-11-24T06:47:15.909041965Z" level=info msg="connecting to shim e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c" address="unix:///run/containerd/s/45da90feab2e505d869dd1828fac4e264485169be0db018f79b6e6728ea1f689" namespace=k8s.io protocol=ttrpc version=3 Nov 24 06:47:15.931571 systemd[1]: Started cri-containerd-e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c.scope - libcontainer container e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c. Nov 24 06:47:15.943816 systemd-resolved[1456]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 24 06:47:15.949555 systemd-networkd[1449]: calia8ad5314b56: Gained IPv6LL Nov 24 06:47:15.983712 containerd[1544]: time="2025-11-24T06:47:15.983667634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596cc5c64b-j6f7z,Uid:d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f,Namespace:calico-system,Attempt:0,} returns sandbox id \"e1092784da37977e8f9c4d245eb02e54e69834aaf66b2ddf26a35d59a73c5a3c\"" Nov 24 06:47:15.984909 containerd[1544]: time="2025-11-24T06:47:15.984886911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 06:47:16.013990 kubelet[2689]: E1124 06:47:16.013949 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed 
to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dw5sv" podUID="74685ae7-a0a3-452c-92e5-934da9ec5504" Nov 24 06:47:16.362489 containerd[1544]: time="2025-11-24T06:47:16.362433798Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:16.363541 containerd[1544]: time="2025-11-24T06:47:16.363501118Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 06:47:16.363594 containerd[1544]: time="2025-11-24T06:47:16.363544830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:16.363726 kubelet[2689]: E1124 06:47:16.363684 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:16.363794 kubelet[2689]: E1124 06:47:16.363729 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:16.363841 kubelet[2689]: E1124 06:47:16.363797 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
calico-kube-controllers start failed in pod calico-kube-controllers-596cc5c64b-j6f7z_calico-system(d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:16.363841 kubelet[2689]: E1124 06:47:16.363828 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596cc5c64b-j6f7z" podUID="d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f" Nov 24 06:47:16.374306 systemd[1]: Started sshd@8-10.0.0.32:22-10.0.0.1:57238.service - OpenSSH per-connection server daemon (10.0.0.1:57238). Nov 24 06:47:16.437695 sshd[5001]: Accepted publickey for core from 10.0.0.1 port 57238 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:16.439166 sshd-session[5001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:16.443144 systemd-logind[1532]: New session 9 of user core. Nov 24 06:47:16.452568 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 24 06:47:16.568003 sshd[5004]: Connection closed by 10.0.0.1 port 57238 Nov 24 06:47:16.568632 sshd-session[5001]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:16.572116 systemd[1]: sshd@8-10.0.0.32:22-10.0.0.1:57238.service: Deactivated successfully. Nov 24 06:47:16.574123 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 24 06:47:16.574877 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit. Nov 24 06:47:16.575958 systemd-logind[1532]: Removed session 9. Nov 24 06:47:17.016377 kubelet[2689]: E1124 06:47:17.016314 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596cc5c64b-j6f7z" podUID="d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f" Nov 24 06:47:17.613341 systemd-networkd[1449]: cali96e1a1f2d3f: Gained IPv6LL Nov 24 06:47:21.580427 systemd[1]: Started sshd@9-10.0.0.32:22-10.0.0.1:40084.service - OpenSSH per-connection server daemon (10.0.0.1:40084). Nov 24 06:47:21.630924 sshd[5027]: Accepted publickey for core from 10.0.0.1 port 40084 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:21.632391 sshd-session[5027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:21.636385 systemd-logind[1532]: New session 10 of user core. Nov 24 06:47:21.642557 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 24 06:47:21.752067 sshd[5030]: Connection closed by 10.0.0.1 port 40084 Nov 24 06:47:21.752403 sshd-session[5027]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:21.761144 systemd[1]: sshd@9-10.0.0.32:22-10.0.0.1:40084.service: Deactivated successfully. Nov 24 06:47:21.762928 systemd[1]: session-10.scope: Deactivated successfully. Nov 24 06:47:21.763706 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit. 
Nov 24 06:47:21.765983 systemd[1]: Started sshd@10-10.0.0.32:22-10.0.0.1:40090.service - OpenSSH per-connection server daemon (10.0.0.1:40090). Nov 24 06:47:21.766843 systemd-logind[1532]: Removed session 10. Nov 24 06:47:21.815865 sshd[5044]: Accepted publickey for core from 10.0.0.1 port 40090 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:21.817357 sshd-session[5044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:21.822259 systemd-logind[1532]: New session 11 of user core. Nov 24 06:47:21.835561 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 24 06:47:21.980544 sshd[5047]: Connection closed by 10.0.0.1 port 40090 Nov 24 06:47:21.979395 sshd-session[5044]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:21.989188 systemd[1]: sshd@10-10.0.0.32:22-10.0.0.1:40090.service: Deactivated successfully. Nov 24 06:47:21.992512 systemd[1]: session-11.scope: Deactivated successfully. Nov 24 06:47:21.994978 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. Nov 24 06:47:21.996762 systemd[1]: Started sshd@11-10.0.0.32:22-10.0.0.1:40104.service - OpenSSH per-connection server daemon (10.0.0.1:40104). Nov 24 06:47:21.998807 systemd-logind[1532]: Removed session 11. Nov 24 06:47:22.042915 sshd[5058]: Accepted publickey for core from 10.0.0.1 port 40104 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:22.044140 sshd-session[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:22.048365 systemd-logind[1532]: New session 12 of user core. Nov 24 06:47:22.062624 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 24 06:47:22.191134 sshd[5061]: Connection closed by 10.0.0.1 port 40104 Nov 24 06:47:22.191428 sshd-session[5058]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:22.197093 systemd[1]: sshd@11-10.0.0.32:22-10.0.0.1:40104.service: Deactivated successfully. Nov 24 06:47:22.199514 systemd[1]: session-12.scope: Deactivated successfully. Nov 24 06:47:22.200352 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit. Nov 24 06:47:22.201622 systemd-logind[1532]: Removed session 12. Nov 24 06:47:22.788090 containerd[1544]: time="2025-11-24T06:47:22.788047161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 06:47:23.119347 containerd[1544]: time="2025-11-24T06:47:23.119186431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:23.120601 containerd[1544]: time="2025-11-24T06:47:23.120534479Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 06:47:23.120601 containerd[1544]: time="2025-11-24T06:47:23.120593921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 06:47:23.120824 kubelet[2689]: E1124 06:47:23.120771 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:23.121228 kubelet[2689]: E1124 06:47:23.120823 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:23.121228 kubelet[2689]: E1124 06:47:23.120911 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-55b85594bc-ngknk_calico-system(acd6d1b4-5631-405d-9c2a-2cf6826dc7b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:23.121740 containerd[1544]: time="2025-11-24T06:47:23.121693530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 06:47:23.466902 containerd[1544]: time="2025-11-24T06:47:23.466859486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:23.468206 containerd[1544]: time="2025-11-24T06:47:23.468148792Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 06:47:23.468206 containerd[1544]: time="2025-11-24T06:47:23.468198916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:23.468450 kubelet[2689]: E1124 06:47:23.468402 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:23.468509 kubelet[2689]: E1124 06:47:23.468464 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:23.468572 kubelet[2689]: E1124 06:47:23.468546 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-55b85594bc-ngknk_calico-system(acd6d1b4-5631-405d-9c2a-2cf6826dc7b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:23.468616 kubelet[2689]: E1124 06:47:23.468588 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55b85594bc-ngknk" podUID="acd6d1b4-5631-405d-9c2a-2cf6826dc7b1" Nov 24 06:47:25.787561 containerd[1544]: 
time="2025-11-24T06:47:25.787491400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:26.115124 containerd[1544]: time="2025-11-24T06:47:26.114959356Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:26.116203 containerd[1544]: time="2025-11-24T06:47:26.116160897Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:26.116267 containerd[1544]: time="2025-11-24T06:47:26.116233964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:26.116460 kubelet[2689]: E1124 06:47:26.116395 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:26.116760 kubelet[2689]: E1124 06:47:26.116464 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:26.116760 kubelet[2689]: E1124 06:47:26.116555 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7785576499-vjzwh_calico-apiserver(e17ed833-91af-49ab-901e-293fc1161607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:26.116760 kubelet[2689]: E1124 06:47:26.116590 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" podUID="e17ed833-91af-49ab-901e-293fc1161607" Nov 24 06:47:26.787666 containerd[1544]: time="2025-11-24T06:47:26.787592508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:27.115365 containerd[1544]: time="2025-11-24T06:47:27.115217660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:27.116349 containerd[1544]: time="2025-11-24T06:47:27.116313943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:27.116404 containerd[1544]: time="2025-11-24T06:47:27.116344731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:27.116587 kubelet[2689]: E1124 06:47:27.116548 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:27.116910 kubelet[2689]: E1124 06:47:27.116595 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:27.116910 kubelet[2689]: E1124 06:47:27.116675 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-768c95b4f7-bql9j_calico-apiserver(cff648c4-cfae-4330-83e3-56fc17913402): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:27.116910 kubelet[2689]: E1124 06:47:27.116719 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" podUID="cff648c4-cfae-4330-83e3-56fc17913402" Nov 24 06:47:27.203341 systemd[1]: Started sshd@12-10.0.0.32:22-10.0.0.1:40120.service - OpenSSH per-connection server daemon (10.0.0.1:40120). 
Nov 24 06:47:27.253112 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 40120 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:27.254758 sshd-session[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:27.258886 systemd-logind[1532]: New session 13 of user core. Nov 24 06:47:27.268566 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 24 06:47:27.376652 sshd[5083]: Connection closed by 10.0.0.1 port 40120 Nov 24 06:47:27.376875 sshd-session[5080]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:27.380700 systemd[1]: sshd@12-10.0.0.32:22-10.0.0.1:40120.service: Deactivated successfully. Nov 24 06:47:27.382556 systemd[1]: session-13.scope: Deactivated successfully. Nov 24 06:47:27.383255 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit. Nov 24 06:47:27.384234 systemd-logind[1532]: Removed session 13. Nov 24 06:47:27.787682 containerd[1544]: time="2025-11-24T06:47:27.787351384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 06:47:28.130816 containerd[1544]: time="2025-11-24T06:47:28.130682399Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:28.137980 containerd[1544]: time="2025-11-24T06:47:28.137898437Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 06:47:28.138039 containerd[1544]: time="2025-11-24T06:47:28.137989328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 06:47:28.138186 kubelet[2689]: E1124 06:47:28.138131 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:28.138702 kubelet[2689]: E1124 06:47:28.138203 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:28.138702 kubelet[2689]: E1124 06:47:28.138275 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8z57g_calico-system(de4b2b6f-2e35-454b-b826-35c899986b61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:28.139298 containerd[1544]: time="2025-11-24T06:47:28.139063208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 06:47:28.463861 containerd[1544]: time="2025-11-24T06:47:28.463738542Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:28.464901 containerd[1544]: time="2025-11-24T06:47:28.464845195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 06:47:28.464984 containerd[1544]: time="2025-11-24T06:47:28.464861325Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 06:47:28.465118 kubelet[2689]: E1124 06:47:28.465079 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:28.465167 kubelet[2689]: E1124 06:47:28.465125 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:28.465229 kubelet[2689]: E1124 06:47:28.465207 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8z57g_calico-system(de4b2b6f-2e35-454b-b826-35c899986b61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:28.465287 kubelet[2689]: E1124 06:47:28.465251 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8z57g" podUID="de4b2b6f-2e35-454b-b826-35c899986b61" Nov 24 06:47:28.787008 containerd[1544]: time="2025-11-24T06:47:28.786936947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:29.178760 containerd[1544]: time="2025-11-24T06:47:29.178630801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:29.179779 containerd[1544]: time="2025-11-24T06:47:29.179739887Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:29.179779 containerd[1544]: time="2025-11-24T06:47:29.179771867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:29.179951 kubelet[2689]: E1124 06:47:29.179917 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:29.180289 kubelet[2689]: E1124 06:47:29.179958 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:29.180289 kubelet[2689]: E1124 06:47:29.180175 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7785576499-wrvhp_calico-apiserver(86381663-d21f-4c14-bc69-3f80735f20fe): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:29.180289 kubelet[2689]: E1124 06:47:29.180224 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-wrvhp" podUID="86381663-d21f-4c14-bc69-3f80735f20fe" Nov 24 06:47:29.180391 containerd[1544]: time="2025-11-24T06:47:29.180223498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 06:47:29.497280 containerd[1544]: time="2025-11-24T06:47:29.497233994Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:29.498494 containerd[1544]: time="2025-11-24T06:47:29.498409196Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 06:47:29.498494 containerd[1544]: 
time="2025-11-24T06:47:29.498504054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:29.498679 kubelet[2689]: E1124 06:47:29.498643 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:29.498717 kubelet[2689]: E1124 06:47:29.498686 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:29.498786 kubelet[2689]: E1124 06:47:29.498761 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dw5sv_calico-system(74685ae7-a0a3-452c-92e5-934da9ec5504): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:29.498823 kubelet[2689]: E1124 06:47:29.498797 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dw5sv" podUID="74685ae7-a0a3-452c-92e5-934da9ec5504" Nov 24 
06:47:29.787754 containerd[1544]: time="2025-11-24T06:47:29.787626959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 06:47:30.170572 containerd[1544]: time="2025-11-24T06:47:30.170455593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:30.171677 containerd[1544]: time="2025-11-24T06:47:30.171614784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 06:47:30.171677 containerd[1544]: time="2025-11-24T06:47:30.171672993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:30.171896 kubelet[2689]: E1124 06:47:30.171854 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:30.171966 kubelet[2689]: E1124 06:47:30.171907 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:30.172050 kubelet[2689]: E1124 06:47:30.172005 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-596cc5c64b-j6f7z_calico-system(d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:30.172185 kubelet[2689]: E1124 06:47:30.172067 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596cc5c64b-j6f7z" podUID="d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f" Nov 24 06:47:32.394087 systemd[1]: Started sshd@13-10.0.0.32:22-10.0.0.1:38846.service - OpenSSH per-connection server daemon (10.0.0.1:38846). Nov 24 06:47:32.447073 sshd[5103]: Accepted publickey for core from 10.0.0.1 port 38846 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:32.448486 sshd-session[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:32.452837 systemd-logind[1532]: New session 14 of user core. Nov 24 06:47:32.466594 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 24 06:47:32.584470 sshd[5106]: Connection closed by 10.0.0.1 port 38846 Nov 24 06:47:32.584801 sshd-session[5103]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:32.589207 systemd[1]: sshd@13-10.0.0.32:22-10.0.0.1:38846.service: Deactivated successfully. Nov 24 06:47:32.591186 systemd[1]: session-14.scope: Deactivated successfully. Nov 24 06:47:32.591994 systemd-logind[1532]: Session 14 logged out. 
Waiting for processes to exit. Nov 24 06:47:32.593178 systemd-logind[1532]: Removed session 14. Nov 24 06:47:37.601858 systemd[1]: Started sshd@14-10.0.0.32:22-10.0.0.1:38862.service - OpenSSH per-connection server daemon (10.0.0.1:38862). Nov 24 06:47:37.661571 sshd[5124]: Accepted publickey for core from 10.0.0.1 port 38862 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:37.663028 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:37.667100 systemd-logind[1532]: New session 15 of user core. Nov 24 06:47:37.677636 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 24 06:47:37.789366 kubelet[2689]: E1124 06:47:37.789302 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55b85594bc-ngknk" podUID="acd6d1b4-5631-405d-9c2a-2cf6826dc7b1" Nov 24 06:47:37.803530 sshd[5127]: Connection closed by 10.0.0.1 port 38862 Nov 24 06:47:37.803926 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:37.808589 systemd[1]: sshd@14-10.0.0.32:22-10.0.0.1:38862.service: Deactivated successfully. 
Nov 24 06:47:37.810758 systemd[1]: session-15.scope: Deactivated successfully. Nov 24 06:47:37.811630 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit. Nov 24 06:47:37.812945 systemd-logind[1532]: Removed session 15. Nov 24 06:47:38.787010 kubelet[2689]: E1124 06:47:38.786969 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" podUID="cff648c4-cfae-4330-83e3-56fc17913402" Nov 24 06:47:41.786625 kubelet[2689]: E1124 06:47:41.786575 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" podUID="e17ed833-91af-49ab-901e-293fc1161607" Nov 24 06:47:41.786625 kubelet[2689]: E1124 06:47:41.786573 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596cc5c64b-j6f7z" podUID="d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f" Nov 24 06:47:41.786625 kubelet[2689]: E1124 06:47:41.786619 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dw5sv" podUID="74685ae7-a0a3-452c-92e5-934da9ec5504" Nov 24 06:47:42.788563 kubelet[2689]: E1124 06:47:42.788487 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8z57g" podUID="de4b2b6f-2e35-454b-b826-35c899986b61" Nov 24 06:47:42.826229 systemd[1]: Started sshd@15-10.0.0.32:22-10.0.0.1:41376.service - 
OpenSSH per-connection server daemon (10.0.0.1:41376). Nov 24 06:47:42.895488 sshd[5170]: Accepted publickey for core from 10.0.0.1 port 41376 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:42.897585 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:42.904126 systemd-logind[1532]: New session 16 of user core. Nov 24 06:47:42.913718 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 24 06:47:43.050876 sshd[5173]: Connection closed by 10.0.0.1 port 41376 Nov 24 06:47:43.051962 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:43.059926 systemd[1]: sshd@15-10.0.0.32:22-10.0.0.1:41376.service: Deactivated successfully. Nov 24 06:47:43.063409 systemd[1]: session-16.scope: Deactivated successfully. Nov 24 06:47:43.065921 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit. Nov 24 06:47:43.071490 systemd[1]: Started sshd@16-10.0.0.32:22-10.0.0.1:41390.service - OpenSSH per-connection server daemon (10.0.0.1:41390). Nov 24 06:47:43.073639 systemd-logind[1532]: Removed session 16. Nov 24 06:47:43.131536 sshd[5186]: Accepted publickey for core from 10.0.0.1 port 41390 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:43.132290 sshd-session[5186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:43.137352 systemd-logind[1532]: New session 17 of user core. Nov 24 06:47:43.144558 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 24 06:47:43.333608 sshd[5189]: Connection closed by 10.0.0.1 port 41390 Nov 24 06:47:43.334409 sshd-session[5186]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:43.342301 systemd[1]: sshd@16-10.0.0.32:22-10.0.0.1:41390.service: Deactivated successfully. Nov 24 06:47:43.344241 systemd[1]: session-17.scope: Deactivated successfully. 
Nov 24 06:47:43.344921 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit. Nov 24 06:47:43.348018 systemd[1]: Started sshd@17-10.0.0.32:22-10.0.0.1:41396.service - OpenSSH per-connection server daemon (10.0.0.1:41396). Nov 24 06:47:43.349548 systemd-logind[1532]: Removed session 17. Nov 24 06:47:43.403987 sshd[5200]: Accepted publickey for core from 10.0.0.1 port 41396 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:43.405510 sshd-session[5200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:43.409853 systemd-logind[1532]: New session 18 of user core. Nov 24 06:47:43.421589 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 24 06:47:43.924376 sshd[5203]: Connection closed by 10.0.0.1 port 41396 Nov 24 06:47:43.927389 sshd-session[5200]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:43.937701 systemd[1]: sshd@17-10.0.0.32:22-10.0.0.1:41396.service: Deactivated successfully. Nov 24 06:47:43.939978 systemd[1]: session-18.scope: Deactivated successfully. Nov 24 06:47:43.941421 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit. Nov 24 06:47:43.946121 systemd[1]: Started sshd@18-10.0.0.32:22-10.0.0.1:41404.service - OpenSSH per-connection server daemon (10.0.0.1:41404). Nov 24 06:47:43.947212 systemd-logind[1532]: Removed session 18. Nov 24 06:47:44.000306 sshd[5221]: Accepted publickey for core from 10.0.0.1 port 41404 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:44.001897 sshd-session[5221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:44.006304 systemd-logind[1532]: New session 19 of user core. Nov 24 06:47:44.013639 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 24 06:47:44.243948 sshd[5224]: Connection closed by 10.0.0.1 port 41404 Nov 24 06:47:44.245817 sshd-session[5221]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:44.259255 systemd[1]: sshd@18-10.0.0.32:22-10.0.0.1:41404.service: Deactivated successfully. Nov 24 06:47:44.261062 systemd[1]: session-19.scope: Deactivated successfully. Nov 24 06:47:44.265563 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit. Nov 24 06:47:44.271477 systemd[1]: Started sshd@19-10.0.0.32:22-10.0.0.1:41418.service - OpenSSH per-connection server daemon (10.0.0.1:41418). Nov 24 06:47:44.273119 systemd-logind[1532]: Removed session 19. Nov 24 06:47:44.323186 sshd[5237]: Accepted publickey for core from 10.0.0.1 port 41418 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:44.324835 sshd-session[5237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:44.329270 systemd-logind[1532]: New session 20 of user core. Nov 24 06:47:44.335586 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 24 06:47:44.450191 sshd[5240]: Connection closed by 10.0.0.1 port 41418 Nov 24 06:47:44.450569 sshd-session[5237]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:44.455679 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit. Nov 24 06:47:44.456509 systemd[1]: sshd@19-10.0.0.32:22-10.0.0.1:41418.service: Deactivated successfully. Nov 24 06:47:44.458553 systemd[1]: session-20.scope: Deactivated successfully. Nov 24 06:47:44.460919 systemd-logind[1532]: Removed session 20. 
Nov 24 06:47:44.787410 kubelet[2689]: E1124 06:47:44.787013 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-wrvhp" podUID="86381663-d21f-4c14-bc69-3f80735f20fe" Nov 24 06:47:49.463417 systemd[1]: Started sshd@20-10.0.0.32:22-10.0.0.1:56300.service - OpenSSH per-connection server daemon (10.0.0.1:56300). Nov 24 06:47:49.513951 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 56300 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:49.515814 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:49.520076 systemd-logind[1532]: New session 21 of user core. Nov 24 06:47:49.524581 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 24 06:47:49.653341 sshd[5268]: Connection closed by 10.0.0.1 port 56300 Nov 24 06:47:49.654789 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:49.659882 systemd[1]: sshd@20-10.0.0.32:22-10.0.0.1:56300.service: Deactivated successfully. Nov 24 06:47:49.662034 systemd[1]: session-21.scope: Deactivated successfully. Nov 24 06:47:49.663480 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit. Nov 24 06:47:49.664728 systemd-logind[1532]: Removed session 21. 
Nov 24 06:47:51.787364 containerd[1544]: time="2025-11-24T06:47:51.787318791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 24 06:47:52.152106 containerd[1544]: time="2025-11-24T06:47:52.151978240Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:52.153177 containerd[1544]: time="2025-11-24T06:47:52.153123339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 24 06:47:52.153177 containerd[1544]: time="2025-11-24T06:47:52.153200977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 24 06:47:52.153386 kubelet[2689]: E1124 06:47:52.153312 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:52.153386 kubelet[2689]: E1124 06:47:52.153354 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 24 06:47:52.153877 kubelet[2689]: E1124 06:47:52.153470 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-55b85594bc-ngknk_calico-system(acd6d1b4-5631-405d-9c2a-2cf6826dc7b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:52.154460 containerd[1544]: time="2025-11-24T06:47:52.154394609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 24 06:47:52.504847 containerd[1544]: time="2025-11-24T06:47:52.504805263Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:52.505923 containerd[1544]: time="2025-11-24T06:47:52.505900145Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 24 06:47:52.505972 containerd[1544]: time="2025-11-24T06:47:52.505965540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:52.506202 kubelet[2689]: E1124 06:47:52.506132 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 06:47:52.506202 kubelet[2689]: E1124 06:47:52.506192 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 24 
06:47:52.506301 kubelet[2689]: E1124 06:47:52.506265 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-55b85594bc-ngknk_calico-system(acd6d1b4-5631-405d-9c2a-2cf6826dc7b1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:52.506352 kubelet[2689]: E1124 06:47:52.506308 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55b85594bc-ngknk" podUID="acd6d1b4-5631-405d-9c2a-2cf6826dc7b1" Nov 24 06:47:52.789088 containerd[1544]: time="2025-11-24T06:47:52.788668379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 24 06:47:53.112216 containerd[1544]: time="2025-11-24T06:47:53.112083113Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:53.113532 containerd[1544]: time="2025-11-24T06:47:53.113469151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 24 06:47:53.113571 containerd[1544]: time="2025-11-24T06:47:53.113534866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 24 06:47:53.113720 kubelet[2689]: E1124 06:47:53.113670 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:53.113770 kubelet[2689]: E1124 06:47:53.113719 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 24 06:47:53.113814 kubelet[2689]: E1124 06:47:53.113793 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-596cc5c64b-j6f7z_calico-system(d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:53.113872 kubelet[2689]: E1124 06:47:53.113825 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596cc5c64b-j6f7z" podUID="d7e9defc-dc9d-4ff9-ae1d-ab935f2e0e9f" Nov 24 06:47:53.788146 containerd[1544]: time="2025-11-24T06:47:53.787866861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 24 06:47:54.139036 containerd[1544]: time="2025-11-24T06:47:54.138901950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:54.140103 containerd[1544]: time="2025-11-24T06:47:54.140072085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 24 06:47:54.140176 containerd[1544]: time="2025-11-24T06:47:54.140123734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 24 06:47:54.140294 kubelet[2689]: E1124 06:47:54.140253 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:54.140565 kubelet[2689]: E1124 06:47:54.140298 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 24 06:47:54.140592 
containerd[1544]: time="2025-11-24T06:47:54.140562682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 24 06:47:54.141600 kubelet[2689]: E1124 06:47:54.140630 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-8z57g_calico-system(de4b2b6f-2e35-454b-b826-35c899986b61): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:54.464776 containerd[1544]: time="2025-11-24T06:47:54.464451420Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:54.465876 containerd[1544]: time="2025-11-24T06:47:54.465776762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 24 06:47:54.465876 containerd[1544]: time="2025-11-24T06:47:54.465828320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:54.466160 kubelet[2689]: E1124 06:47:54.466107 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:54.466160 kubelet[2689]: E1124 06:47:54.466161 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 24 06:47:54.466478 kubelet[2689]: E1124 06:47:54.466363 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-dw5sv_calico-system(74685ae7-a0a3-452c-92e5-934da9ec5504): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:54.466478 kubelet[2689]: E1124 06:47:54.466401 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-dw5sv" podUID="74685ae7-a0a3-452c-92e5-934da9ec5504" Nov 24 06:47:54.466618 containerd[1544]: time="2025-11-24T06:47:54.466581759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:54.679053 systemd[1]: Started sshd@21-10.0.0.32:22-10.0.0.1:56310.service - OpenSSH per-connection server daemon (10.0.0.1:56310). Nov 24 06:47:54.732251 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 56310 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:54.733939 sshd-session[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:54.738395 systemd-logind[1532]: New session 22 of user core. Nov 24 06:47:54.743632 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 24 06:47:54.807732 containerd[1544]: time="2025-11-24T06:47:54.807673842Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:54.808827 containerd[1544]: time="2025-11-24T06:47:54.808766709Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:54.809077 kubelet[2689]: E1124 06:47:54.809013 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:54.809127 containerd[1544]: time="2025-11-24T06:47:54.808807918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:54.809198 kubelet[2689]: E1124 06:47:54.809182 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:54.809479 kubelet[2689]: E1124 06:47:54.809454 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-768c95b4f7-bql9j_calico-apiserver(cff648c4-cfae-4330-83e3-56fc17913402): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:54.809628 containerd[1544]: time="2025-11-24T06:47:54.809580072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 24 06:47:54.809788 kubelet[2689]: E1124 06:47:54.809678 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-768c95b4f7-bql9j" podUID="cff648c4-cfae-4330-83e3-56fc17913402" Nov 24 06:47:54.884992 sshd[5285]: Connection closed by 10.0.0.1 port 56310 Nov 24 06:47:54.888753 sshd-session[5282]: pam_unix(sshd:session): session closed for user core Nov 24 06:47:54.894986 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit. Nov 24 06:47:54.898881 systemd[1]: sshd@21-10.0.0.32:22-10.0.0.1:56310.service: Deactivated successfully. Nov 24 06:47:54.901378 systemd[1]: session-22.scope: Deactivated successfully. Nov 24 06:47:54.907256 systemd-logind[1532]: Removed session 22. 
Nov 24 06:47:55.121077 containerd[1544]: time="2025-11-24T06:47:55.120953667Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:55.122089 containerd[1544]: time="2025-11-24T06:47:55.121996047Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 24 06:47:55.122089 containerd[1544]: time="2025-11-24T06:47:55.122033108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 24 06:47:55.122271 kubelet[2689]: E1124 06:47:55.122197 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:55.122271 kubelet[2689]: E1124 06:47:55.122233 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 24 06:47:55.122540 kubelet[2689]: E1124 06:47:55.122416 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-8z57g_calico-system(de4b2b6f-2e35-454b-b826-35c899986b61): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:55.122540 kubelet[2689]: E1124 06:47:55.122487 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-8z57g" podUID="de4b2b6f-2e35-454b-b826-35c899986b61" Nov 24 06:47:55.122889 containerd[1544]: time="2025-11-24T06:47:55.122524696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:55.486613 containerd[1544]: time="2025-11-24T06:47:55.486563381Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:55.487795 containerd[1544]: time="2025-11-24T06:47:55.487742171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:55.487906 containerd[1544]: time="2025-11-24T06:47:55.487829878Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:55.488071 kubelet[2689]: E1124 06:47:55.488013 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:55.488313 kubelet[2689]: E1124 06:47:55.488076 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:55.488313 kubelet[2689]: E1124 06:47:55.488162 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7785576499-vjzwh_calico-apiserver(e17ed833-91af-49ab-901e-293fc1161607): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:55.488313 kubelet[2689]: E1124 06:47:55.488204 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-vjzwh" podUID="e17ed833-91af-49ab-901e-293fc1161607" Nov 24 06:47:57.787609 
containerd[1544]: time="2025-11-24T06:47:57.787571556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 24 06:47:58.120591 containerd[1544]: time="2025-11-24T06:47:58.120422072Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 24 06:47:58.122862 containerd[1544]: time="2025-11-24T06:47:58.122809394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 24 06:47:58.122862 containerd[1544]: time="2025-11-24T06:47:58.122869569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 24 06:47:58.123052 kubelet[2689]: E1124 06:47:58.122953 2689 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:58.123052 kubelet[2689]: E1124 06:47:58.122986 2689 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 24 06:47:58.123417 kubelet[2689]: E1124 06:47:58.123054 2689 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7785576499-wrvhp_calico-apiserver(86381663-d21f-4c14-bc69-3f80735f20fe): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 24 06:47:58.123417 kubelet[2689]: E1124 06:47:58.123085 2689 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7785576499-wrvhp" podUID="86381663-d21f-4c14-bc69-3f80735f20fe" Nov 24 06:47:59.898005 systemd[1]: Started sshd@22-10.0.0.32:22-10.0.0.1:53588.service - OpenSSH per-connection server daemon (10.0.0.1:53588). Nov 24 06:47:59.952650 sshd[5300]: Accepted publickey for core from 10.0.0.1 port 53588 ssh2: RSA SHA256:TIi8/bC2awVbEZ93VxTeez+OSWVov1y1XEW0M7EonxM Nov 24 06:47:59.954036 sshd-session[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 24 06:47:59.957996 systemd-logind[1532]: New session 23 of user core. Nov 24 06:47:59.967583 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 24 06:48:00.081206 sshd[5303]: Connection closed by 10.0.0.1 port 53588 Nov 24 06:48:00.083013 sshd-session[5300]: pam_unix(sshd:session): session closed for user core Nov 24 06:48:00.086730 systemd[1]: sshd@22-10.0.0.32:22-10.0.0.1:53588.service: Deactivated successfully. Nov 24 06:48:00.088782 systemd[1]: session-23.scope: Deactivated successfully. Nov 24 06:48:00.089609 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit. Nov 24 06:48:00.090821 systemd-logind[1532]: Removed session 23.