Oct 13 05:55:15.874725 kernel: Linux version 6.12.51-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Oct 12 22:37:12 -00 2025
Oct 13 05:55:15.874758 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:55:15.874767 kernel: BIOS-provided physical RAM map:
Oct 13 05:55:15.874774 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 13 05:55:15.874781 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 13 05:55:15.874787 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 13 05:55:15.874803 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Oct 13 05:55:15.874810 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Oct 13 05:55:15.874819 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 13 05:55:15.874826 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 13 05:55:15.874835 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 13 05:55:15.874842 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 13 05:55:15.874848 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 13 05:55:15.874855 kernel: NX (Execute Disable) protection: active
Oct 13 05:55:15.874865 kernel: APIC: Static calls initialized
Oct 13 05:55:15.874872 kernel: SMBIOS 2.8 present.
Oct 13 05:55:15.874880 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Oct 13 05:55:15.874887 kernel: DMI: Memory slots populated: 1/1
Oct 13 05:55:15.874894 kernel: Hypervisor detected: KVM
Oct 13 05:55:15.874908 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 13 05:55:15.874923 kernel: kvm-clock: using sched offset of 4645578700 cycles
Oct 13 05:55:15.874933 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 13 05:55:15.874941 kernel: tsc: Detected 2794.750 MHz processor
Oct 13 05:55:15.874948 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 13 05:55:15.874958 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 13 05:55:15.874965 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Oct 13 05:55:15.874973 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 13 05:55:15.874981 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 13 05:55:15.874988 kernel: Using GB pages for direct mapping
Oct 13 05:55:15.874995 kernel: ACPI: Early table checksum verification disabled
Oct 13 05:55:15.875015 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Oct 13 05:55:15.875031 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:15.875050 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:15.875064 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:15.875072 kernel: ACPI: FACS 0x000000009CFE0000 000040
Oct 13 05:55:15.875079 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:15.875086 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:15.875094 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:15.875101 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 13 05:55:15.875108 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Oct 13 05:55:15.875121 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Oct 13 05:55:15.875129 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Oct 13 05:55:15.875136 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Oct 13 05:55:15.875144 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Oct 13 05:55:15.875151 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Oct 13 05:55:15.875159 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Oct 13 05:55:15.875168 kernel: No NUMA configuration found
Oct 13 05:55:15.875176 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Oct 13 05:55:15.875183 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Oct 13 05:55:15.875191 kernel: Zone ranges:
Oct 13 05:55:15.875198 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 13 05:55:15.875206 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Oct 13 05:55:15.875213 kernel: Normal empty
Oct 13 05:55:15.875221 kernel: Device empty
Oct 13 05:55:15.875228 kernel: Movable zone start for each node
Oct 13 05:55:15.875236 kernel: Early memory node ranges
Oct 13 05:55:15.875246 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 13 05:55:15.875253 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Oct 13 05:55:15.875260 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Oct 13 05:55:15.875268 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 13 05:55:15.875275 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 13 05:55:15.875283 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Oct 13 05:55:15.875291 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 13 05:55:15.875298 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 13 05:55:15.875306 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 13 05:55:15.875316 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 13 05:55:15.875323 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 13 05:55:15.875331 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 13 05:55:15.875338 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 13 05:55:15.875346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 13 05:55:15.875354 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 13 05:55:15.875361 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 13 05:55:15.875369 kernel: TSC deadline timer available
Oct 13 05:55:15.875376 kernel: CPU topo: Max. logical packages: 1
Oct 13 05:55:15.875386 kernel: CPU topo: Max. logical dies: 1
Oct 13 05:55:15.875393 kernel: CPU topo: Max. dies per package: 1
Oct 13 05:55:15.875400 kernel: CPU topo: Max. threads per core: 1
Oct 13 05:55:15.875408 kernel: CPU topo: Num. cores per package: 4
Oct 13 05:55:15.875415 kernel: CPU topo: Num. threads per package: 4
Oct 13 05:55:15.875422 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 13 05:55:15.875430 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 13 05:55:15.875438 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 13 05:55:15.875445 kernel: kvm-guest: setup PV sched yield
Oct 13 05:55:15.875453 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 13 05:55:15.875463 kernel: Booting paravirtualized kernel on KVM
Oct 13 05:55:15.875471 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 13 05:55:15.875479 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 13 05:55:15.875486 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 13 05:55:15.875494 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 13 05:55:15.875501 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 13 05:55:15.875516 kernel: kvm-guest: PV spinlocks enabled
Oct 13 05:55:15.875524 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 13 05:55:15.875533 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:55:15.875543 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 13 05:55:15.875551 kernel: random: crng init done
Oct 13 05:55:15.875559 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 13 05:55:15.875567 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 13 05:55:15.875574 kernel: Fallback order for Node 0: 0
Oct 13 05:55:15.875582 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Oct 13 05:55:15.875589 kernel: Policy zone: DMA32
Oct 13 05:55:15.875597 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 13 05:55:15.875606 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 13 05:55:15.875614 kernel: ftrace: allocating 40139 entries in 157 pages
Oct 13 05:55:15.875621 kernel: ftrace: allocated 157 pages with 5 groups
Oct 13 05:55:15.875629 kernel: Dynamic Preempt: voluntary
Oct 13 05:55:15.875636 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 13 05:55:15.875654 kernel: rcu: RCU event tracing is enabled.
Oct 13 05:55:15.875662 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 13 05:55:15.875670 kernel: Trampoline variant of Tasks RCU enabled.
Oct 13 05:55:15.875677 kernel: Rude variant of Tasks RCU enabled.
Oct 13 05:55:15.875687 kernel: Tracing variant of Tasks RCU enabled.
Oct 13 05:55:15.875694 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 13 05:55:15.875702 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 13 05:55:15.875710 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:55:15.875717 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:55:15.875725 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 13 05:55:15.875733 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 13 05:55:15.875747 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 13 05:55:15.875764 kernel: Console: colour VGA+ 80x25
Oct 13 05:55:15.875772 kernel: printk: legacy console [ttyS0] enabled
Oct 13 05:55:15.875780 kernel: ACPI: Core revision 20240827
Oct 13 05:55:15.875788 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 13 05:55:15.875807 kernel: APIC: Switch to symmetric I/O mode setup
Oct 13 05:55:15.875944 kernel: x2apic enabled
Oct 13 05:55:15.875953 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 13 05:55:15.875961 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 13 05:55:15.875969 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 13 05:55:15.875984 kernel: kvm-guest: setup PV IPIs
Oct 13 05:55:15.875992 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 13 05:55:15.876000 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 13 05:55:15.876020 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Oct 13 05:55:15.876028 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 13 05:55:15.876036 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 13 05:55:15.876044 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 13 05:55:15.876053 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 13 05:55:15.876063 kernel: Spectre V2 : Mitigation: Retpolines
Oct 13 05:55:15.876071 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 13 05:55:15.876086 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 13 05:55:15.876096 kernel: active return thunk: retbleed_return_thunk
Oct 13 05:55:15.876111 kernel: RETBleed: Mitigation: untrained return thunk
Oct 13 05:55:15.876124 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 13 05:55:15.876142 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 13 05:55:15.876159 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 13 05:55:15.876168 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 13 05:55:15.876179 kernel: active return thunk: srso_return_thunk
Oct 13 05:55:15.876187 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 13 05:55:15.876195 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 13 05:55:15.876203 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 13 05:55:15.876211 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 13 05:55:15.876219 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 13 05:55:15.876227 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 13 05:55:15.876235 kernel: Freeing SMP alternatives memory: 32K
Oct 13 05:55:15.876252 kernel: pid_max: default: 32768 minimum: 301
Oct 13 05:55:15.876265 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 13 05:55:15.876276 kernel: landlock: Up and running.
Oct 13 05:55:15.876286 kernel: SELinux: Initializing.
Oct 13 05:55:15.876294 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 05:55:15.876302 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 13 05:55:15.876310 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 13 05:55:15.876318 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 13 05:55:15.876326 kernel: ... version: 0
Oct 13 05:55:15.876337 kernel: ... bit width: 48
Oct 13 05:55:15.876345 kernel: ... generic registers: 6
Oct 13 05:55:15.876353 kernel: ... value mask: 0000ffffffffffff
Oct 13 05:55:15.876361 kernel: ... max period: 00007fffffffffff
Oct 13 05:55:15.876369 kernel: ... fixed-purpose events: 0
Oct 13 05:55:15.876376 kernel: ... event mask: 000000000000003f
Oct 13 05:55:15.876395 kernel: signal: max sigframe size: 1776
Oct 13 05:55:15.876405 kernel: rcu: Hierarchical SRCU implementation.
Oct 13 05:55:15.876413 kernel: rcu: Max phase no-delay instances is 400.
Oct 13 05:55:15.876421 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 13 05:55:15.876432 kernel: smp: Bringing up secondary CPUs ...
Oct 13 05:55:15.876440 kernel: smpboot: x86: Booting SMP configuration:
Oct 13 05:55:15.876451 kernel: .... node #0, CPUs: #1 #2 #3
Oct 13 05:55:15.876459 kernel: smp: Brought up 1 node, 4 CPUs
Oct 13 05:55:15.876470 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Oct 13 05:55:15.876481 kernel: Memory: 2428912K/2571752K available (14336K kernel code, 2443K rwdata, 10000K rodata, 54096K init, 2852K bss, 136904K reserved, 0K cma-reserved)
Oct 13 05:55:15.876490 kernel: devtmpfs: initialized
Oct 13 05:55:15.876507 kernel: x86/mm: Memory block size: 128MB
Oct 13 05:55:15.876528 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 13 05:55:15.876541 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 13 05:55:15.876563 kernel: pinctrl core: initialized pinctrl subsystem
Oct 13 05:55:15.876580 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 13 05:55:15.876588 kernel: audit: initializing netlink subsys (disabled)
Oct 13 05:55:15.876596 kernel: audit: type=2000 audit(1760334912.638:1): state=initialized audit_enabled=0 res=1
Oct 13 05:55:15.876604 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 13 05:55:15.876612 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 13 05:55:15.876620 kernel: cpuidle: using governor menu
Oct 13 05:55:15.876629 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 13 05:55:15.876645 kernel: dca service started, version 1.12.1
Oct 13 05:55:15.876659 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct 13 05:55:15.876669 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 13 05:55:15.876679 kernel: PCI: Using configuration type 1 for base access
Oct 13 05:55:15.876689 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 13 05:55:15.876699 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 13 05:55:15.876708 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 13 05:55:15.876718 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 13 05:55:15.876731 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 13 05:55:15.876741 kernel: ACPI: Added _OSI(Module Device)
Oct 13 05:55:15.876752 kernel: ACPI: Added _OSI(Processor Device)
Oct 13 05:55:15.876762 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 13 05:55:15.876773 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 13 05:55:15.876783 kernel: ACPI: Interpreter enabled
Oct 13 05:55:15.876791 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 13 05:55:15.876798 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 13 05:55:15.876817 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 13 05:55:15.876825 kernel: PCI: Using E820 reservations for host bridge windows
Oct 13 05:55:15.876844 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 13 05:55:15.876852 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 13 05:55:15.877157 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 13 05:55:15.877299 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 13 05:55:15.877435 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 13 05:55:15.877451 kernel: PCI host bridge to bus 0000:00
Oct 13 05:55:15.877613 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 13 05:55:15.877731 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 13 05:55:15.877858 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 13 05:55:15.880204 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Oct 13 05:55:15.880364 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 13 05:55:15.880478 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Oct 13 05:55:15.880608 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 13 05:55:15.880782 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 13 05:55:15.880922 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 13 05:55:15.881076 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Oct 13 05:55:15.881196 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Oct 13 05:55:15.881324 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Oct 13 05:55:15.881444 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 13 05:55:15.881710 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 13 05:55:15.881839 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Oct 13 05:55:15.881960 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Oct 13 05:55:15.882174 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Oct 13 05:55:15.882320 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 13 05:55:15.882443 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Oct 13 05:55:15.882632 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Oct 13 05:55:15.882755 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Oct 13 05:55:15.882899 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 13 05:55:15.883040 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Oct 13 05:55:15.883162 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Oct 13 05:55:15.883280 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Oct 13 05:55:15.883399 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Oct 13 05:55:15.883730 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 13 05:55:15.883858 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 13 05:55:15.883994 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 13 05:55:15.884133 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Oct 13 05:55:15.884252 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Oct 13 05:55:15.884392 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 13 05:55:15.884525 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Oct 13 05:55:15.884536 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 13 05:55:15.884550 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 13 05:55:15.884558 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 13 05:55:15.884566 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 13 05:55:15.884574 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 13 05:55:15.884583 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 13 05:55:15.884591 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 13 05:55:15.884598 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 13 05:55:15.884606 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 13 05:55:15.884614 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 13 05:55:15.884624 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 13 05:55:15.884632 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 13 05:55:15.884640 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 13 05:55:15.884648 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 13 05:55:15.884657 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 13 05:55:15.884665 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 13 05:55:15.884673 kernel: iommu: Default domain type: Translated
Oct 13 05:55:15.884681 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 13 05:55:15.884689 kernel: PCI: Using ACPI for IRQ routing
Oct 13 05:55:15.884699 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 13 05:55:15.884708 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 13 05:55:15.884716 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Oct 13 05:55:15.884850 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 13 05:55:15.884971 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 13 05:55:15.885703 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 13 05:55:15.885721 kernel: vgaarb: loaded
Oct 13 05:55:15.885730 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 13 05:55:15.885743 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 13 05:55:15.885752 kernel: clocksource: Switched to clocksource kvm-clock
Oct 13 05:55:15.885760 kernel: VFS: Disk quotas dquot_6.6.0
Oct 13 05:55:15.885769 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 13 05:55:15.885777 kernel: pnp: PnP ACPI init
Oct 13 05:55:15.885934 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 13 05:55:15.885948 kernel: pnp: PnP ACPI: found 6 devices
Oct 13 05:55:15.885957 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 13 05:55:15.885969 kernel: NET: Registered PF_INET protocol family
Oct 13 05:55:15.885977 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 13 05:55:15.885986 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 13 05:55:15.885995 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 13 05:55:15.886142 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 13 05:55:15.886152 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 13 05:55:15.886160 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 13 05:55:15.886168 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 05:55:15.886177 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 13 05:55:15.886190 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 13 05:55:15.886198 kernel: NET: Registered PF_XDP protocol family
Oct 13 05:55:15.886457 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 13 05:55:15.886581 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 13 05:55:15.886690 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 13 05:55:15.886799 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Oct 13 05:55:15.886909 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 13 05:55:15.887035 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Oct 13 05:55:15.887051 kernel: PCI: CLS 0 bytes, default 64
Oct 13 05:55:15.887060 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848e100549, max_idle_ns: 440795215505 ns
Oct 13 05:55:15.887068 kernel: Initialise system trusted keyrings
Oct 13 05:55:15.887076 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 13 05:55:15.887085 kernel: Key type asymmetric registered
Oct 13 05:55:15.887093 kernel: Asymmetric key parser 'x509' registered
Oct 13 05:55:15.887101 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 13 05:55:15.887110 kernel: io scheduler mq-deadline registered
Oct 13 05:55:15.887118 kernel: io scheduler kyber registered
Oct 13 05:55:15.887127 kernel: io scheduler bfq registered
Oct 13 05:55:15.887138 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 13 05:55:15.887147 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 13 05:55:15.887156 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 13 05:55:15.887164 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 13 05:55:15.887172 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 13 05:55:15.887180 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 13 05:55:15.887188 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 13 05:55:15.887196 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 13 05:55:15.887204 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 13 05:55:15.887352 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 13 05:55:15.887366 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 13 05:55:15.887483 kernel: rtc_cmos 00:04: registered as rtc0
Oct 13 05:55:15.887620 kernel: rtc_cmos 00:04: setting system clock to 2025-10-13T05:55:15 UTC (1760334915)
Oct 13 05:55:15.887747 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Oct 13 05:55:15.887761 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 13 05:55:15.887770 kernel: NET: Registered PF_INET6 protocol family
Oct 13 05:55:15.887782 kernel: Segment Routing with IPv6
Oct 13 05:55:15.887790 kernel: In-situ OAM (IOAM) with IPv6
Oct 13 05:55:15.887798 kernel: NET: Registered PF_PACKET protocol family
Oct 13 05:55:15.887807 kernel: Key type dns_resolver registered
Oct 13 05:55:15.887815 kernel: IPI shorthand broadcast: enabled
Oct 13 05:55:15.887823 kernel: sched_clock: Marking stable (3218001592, 203779446)->(3553649832, -131868794)
Oct 13 05:55:15.887831 kernel: registered taskstats version 1
Oct 13 05:55:15.887839 kernel: Loading compiled-in X.509 certificates
Oct 13 05:55:15.887847 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.51-flatcar: d8dbf4abead15098249886d373d42a3af4f50ccd'
Oct 13 05:55:15.887858 kernel: Demotion targets for Node 0: null
Oct 13 05:55:15.887866 kernel: Key type .fscrypt registered
Oct 13 05:55:15.887874 kernel: Key type fscrypt-provisioning registered
Oct 13 05:55:15.887882 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 13 05:55:15.887890 kernel: ima: Allocated hash algorithm: sha1
Oct 13 05:55:15.887898 kernel: ima: No architecture policies found
Oct 13 05:55:15.887906 kernel: clk: Disabling unused clocks
Oct 13 05:55:15.887921 kernel: Warning: unable to open an initial console.
Oct 13 05:55:15.887929 kernel: Freeing unused kernel image (initmem) memory: 54096K
Oct 13 05:55:15.887940 kernel: Write protecting the kernel read-only data: 24576k
Oct 13 05:55:15.887948 kernel: Freeing unused kernel image (rodata/data gap) memory: 240K
Oct 13 05:55:15.887956 kernel: Run /init as init process
Oct 13 05:55:15.887964 kernel: with arguments:
Oct 13 05:55:15.887973 kernel: /init
Oct 13 05:55:15.887981 kernel: with environment:
Oct 13 05:55:15.887989 kernel: HOME=/
Oct 13 05:55:15.887997 kernel: TERM=linux
Oct 13 05:55:15.888020 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 13 05:55:15.888032 systemd[1]: Successfully made /usr/ read-only.
Oct 13 05:55:15.888056 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 13 05:55:15.888069 systemd[1]: Detected virtualization kvm.
Oct 13 05:55:15.888077 systemd[1]: Detected architecture x86-64.
Oct 13 05:55:15.888086 systemd[1]: Running in initrd.
Oct 13 05:55:15.888096 systemd[1]: No hostname configured, using default hostname.
Oct 13 05:55:15.888106 systemd[1]: Hostname set to .
Oct 13 05:55:15.888114 systemd[1]: Initializing machine ID from VM UUID.
Oct 13 05:55:15.888123 systemd[1]: Queued start job for default target initrd.target.
Oct 13 05:55:15.888131 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 13 05:55:15.888140 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 13 05:55:15.888149 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 13 05:55:15.888158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 13 05:55:15.888169 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 13 05:55:15.888179 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 13 05:55:15.888189 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Oct 13 05:55:15.888198 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Oct 13 05:55:15.888207 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 13 05:55:15.888216 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 13 05:55:15.888224 systemd[1]: Reached target paths.target - Path Units.
Oct 13 05:55:15.888235 systemd[1]: Reached target slices.target - Slice Units.
Oct 13 05:55:15.888244 systemd[1]: Reached target swap.target - Swaps.
Oct 13 05:55:15.888253 systemd[1]: Reached target timers.target - Timer Units.
Oct 13 05:55:15.888261 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 13 05:55:15.888270 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 13 05:55:15.888279 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 13 05:55:15.888288 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 13 05:55:15.888297 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 13 05:55:15.888308 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 13 05:55:15.888318 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 13 05:55:15.888327 systemd[1]: Reached target sockets.target - Socket Units.
Oct 13 05:55:15.888336 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 13 05:55:15.888345 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 13 05:55:15.888356 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 13 05:55:15.888367 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 13 05:55:15.888376 systemd[1]: Starting systemd-fsck-usr.service...
Oct 13 05:55:15.888385 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 13 05:55:15.888394 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 13 05:55:15.888402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:55:15.888411 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 13 05:55:15.888423 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 13 05:55:15.888464 systemd-journald[219]: Collecting audit messages is disabled.
Oct 13 05:55:15.888491 systemd[1]: Finished systemd-fsck-usr.service.
Oct 13 05:55:15.888500 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 13 05:55:15.888517 systemd-journald[219]: Journal started
Oct 13 05:55:15.888539 systemd-journald[219]: Runtime Journal (/run/log/journal/96c1af87ddd14eb38b0f8af437088dc5) is 6M, max 48.6M, 42.5M free.
Oct 13 05:55:15.890026 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 13 05:55:15.876881 systemd-modules-load[220]: Inserted module 'overlay'
Oct 13 05:55:15.895088 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 13 05:55:15.896281 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 13 05:55:15.904143 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 13 05:55:15.973402 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 13 05:55:15.973434 kernel: Bridge firewalling registered
Oct 13 05:55:15.913711 systemd-modules-load[220]: Inserted module 'br_netfilter'
Oct 13 05:55:15.976869 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 13 05:55:15.991163 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:55:15.993811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 13 05:55:15.996527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 13 05:55:16.006268 systemd-tmpfiles[236]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 13 05:55:16.008956 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 13 05:55:16.011933 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 13 05:55:16.013878 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 13 05:55:16.017132 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 13 05:55:16.032253 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 13 05:55:16.037477 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 13 05:55:16.058447 dracut-cmdline[264]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a48d469b0deb49c328e6faf6cf366b11952d47f2d24963c866a0ea8221fb0039
Oct 13 05:55:16.080793 systemd-resolved[256]: Positive Trust Anchors:
Oct 13 05:55:16.080826 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 13 05:55:16.080865 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 13 05:55:16.084857 systemd-resolved[256]: Defaulting to hostname 'linux'.
Oct 13 05:55:16.086840 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 13 05:55:16.096076 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 13 05:55:16.200059 kernel: SCSI subsystem initialized
Oct 13 05:55:16.212040 kernel: Loading iSCSI transport class v2.0-870.
Oct 13 05:55:16.226053 kernel: iscsi: registered transport (tcp)
Oct 13 05:55:16.252040 kernel: iscsi: registered transport (qla4xxx)
Oct 13 05:55:16.252074 kernel: QLogic iSCSI HBA Driver
Oct 13 05:55:16.276388 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 13 05:55:16.302962 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 13 05:55:16.306223 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 13 05:55:16.366677 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 13 05:55:16.369727 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 13 05:55:16.441039 kernel: raid6: avx2x4 gen() 26257 MB/s
Oct 13 05:55:16.458045 kernel: raid6: avx2x2 gen() 24229 MB/s
Oct 13 05:55:16.475779 kernel: raid6: avx2x1 gen() 22510 MB/s
Oct 13 05:55:16.475808 kernel: raid6: using algorithm avx2x4 gen() 26257 MB/s
Oct 13 05:55:16.493767 kernel: raid6: .... xor() 8473 MB/s, rmw enabled
Oct 13 05:55:16.493820 kernel: raid6: using avx2x2 recovery algorithm
Oct 13 05:55:16.515035 kernel: xor: automatically using best checksumming function avx
Oct 13 05:55:16.847065 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 13 05:55:16.855798 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 13 05:55:16.859715 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 13 05:55:16.889257 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Oct 13 05:55:16.895042 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 13 05:55:16.896463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 13 05:55:16.921394 dracut-pre-trigger[478]: rd.md=0: removing MD RAID activation
Oct 13 05:55:16.952589 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 13 05:55:16.957730 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 13 05:55:17.124496 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 13 05:55:17.130577 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 13 05:55:17.229396 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 13 05:55:17.229447 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 13 05:55:17.251842 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Oct 13 05:55:17.256500 kernel: cryptd: max_cpu_qlen set to 1000
Oct 13 05:55:17.256551 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 13 05:55:17.261111 kernel: GPT:9289727 != 19775487
Oct 13 05:55:17.261146 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 13 05:55:17.261158 kernel: GPT:9289727 != 19775487
Oct 13 05:55:17.261167 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 13 05:55:17.261178 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 05:55:17.261189 kernel: libata version 3.00 loaded.
Oct 13 05:55:17.265551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 13 05:55:17.265675 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:55:17.271364 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:55:17.276740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 13 05:55:17.281120 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Oct 13 05:55:17.292054 kernel: AES CTR mode by8 optimization enabled
Oct 13 05:55:17.294975 kernel: ahci 0000:00:1f.2: version 3.0
Oct 13 05:55:17.295219 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 13 05:55:17.304891 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 13 05:55:17.305159 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 13 05:55:17.305300 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Oct 13 05:55:17.314040 kernel: scsi host0: ahci
Oct 13 05:55:17.315029 kernel: scsi host1: ahci
Oct 13 05:55:17.317033 kernel: scsi host2: ahci
Oct 13 05:55:17.317236 kernel: scsi host3: ahci
Oct 13 05:55:17.317516 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 13 05:55:17.319319 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Oct 13 05:55:17.333177 kernel: scsi host4: ahci
Oct 13 05:55:17.336027 kernel: scsi host5: ahci
Oct 13 05:55:17.336252 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Oct 13 05:55:17.336269 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Oct 13 05:55:17.336280 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Oct 13 05:55:17.336290 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Oct 13 05:55:17.336300 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Oct 13 05:55:17.336314 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Oct 13 05:55:17.337780 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 13 05:55:17.418817 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 13 05:55:17.419787 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 13 05:55:17.444226 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 13 05:55:17.446055 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 13 05:55:17.479353 disk-uuid[635]: Primary Header is updated.
Oct 13 05:55:17.479353 disk-uuid[635]: Secondary Entries is updated.
Oct 13 05:55:17.479353 disk-uuid[635]: Secondary Header is updated.
Oct 13 05:55:17.485037 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 05:55:17.491042 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 05:55:17.648661 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 13 05:55:17.648736 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 13 05:55:17.648747 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 13 05:55:17.649033 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 13 05:55:17.652043 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 13 05:55:17.652079 kernel: ata3.00: LPM support broken, forcing max_power
Oct 13 05:55:17.653218 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 13 05:55:17.653234 kernel: ata3.00: applying bridge limits
Oct 13 05:55:17.656041 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 13 05:55:17.656062 kernel: ata3.00: LPM support broken, forcing max_power
Oct 13 05:55:17.657168 kernel: ata3.00: configured for UDMA/100
Oct 13 05:55:17.658037 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 13 05:55:17.703729 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 13 05:55:17.704149 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 13 05:55:17.732091 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 13 05:55:18.145750 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 13 05:55:18.149818 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 13 05:55:18.154073 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 13 05:55:18.158420 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 13 05:55:18.163401 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 13 05:55:18.191106 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 13 05:55:18.491655 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 13 05:55:18.491766 disk-uuid[636]: The operation has completed successfully.
Oct 13 05:55:18.526313 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 13 05:55:18.526461 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 13 05:55:18.562862 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Oct 13 05:55:18.590587 sh[665]: Success
Oct 13 05:55:18.609705 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 13 05:55:18.609736 kernel: device-mapper: uevent: version 1.0.3
Oct 13 05:55:18.611365 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 13 05:55:18.621026 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Oct 13 05:55:18.650051 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Oct 13 05:55:18.652643 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Oct 13 05:55:18.666920 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Oct 13 05:55:18.673046 kernel: BTRFS: device fsid c8746500-26f5-4ec1-9da8-aef51ec7db92 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (677)
Oct 13 05:55:18.676325 kernel: BTRFS info (device dm-0): first mount of filesystem c8746500-26f5-4ec1-9da8-aef51ec7db92
Oct 13 05:55:18.676355 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:55:18.682378 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 13 05:55:18.682438 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 13 05:55:18.683844 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Oct 13 05:55:18.687391 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 13 05:55:18.688479 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 13 05:55:18.689463 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 13 05:55:18.697551 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 13 05:55:18.718071 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (710)
Oct 13 05:55:18.721292 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:55:18.721321 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:55:18.726266 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 05:55:18.726324 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 05:55:18.733048 kernel: BTRFS info (device vda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:55:18.734351 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 13 05:55:18.737790 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 13 05:55:18.857509 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 13 05:55:18.864083 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 13 05:55:18.977817 ignition[759]: Ignition 2.22.0
Oct 13 05:55:18.977835 ignition[759]: Stage: fetch-offline
Oct 13 05:55:18.977886 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:55:18.977896 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:55:18.978032 ignition[759]: parsed url from cmdline: ""
Oct 13 05:55:18.978036 ignition[759]: no config URL provided
Oct 13 05:55:18.978044 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Oct 13 05:55:18.978053 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Oct 13 05:55:18.986126 systemd-networkd[847]: lo: Link UP
Oct 13 05:55:18.978077 ignition[759]: op(1): [started] loading QEMU firmware config module
Oct 13 05:55:18.986130 systemd-networkd[847]: lo: Gained carrier
Oct 13 05:55:18.978083 ignition[759]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 13 05:55:18.987819 systemd-networkd[847]: Enumeration completed
Oct 13 05:55:18.987891 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 13 05:55:18.996624 ignition[759]: op(1): [finished] loading QEMU firmware config module
Oct 13 05:55:18.988613 systemd[1]: Reached target network.target - Network.
Oct 13 05:55:18.989600 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 05:55:18.989607 systemd-networkd[847]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 13 05:55:18.990270 systemd-networkd[847]: eth0: Link UP
Oct 13 05:55:18.990798 systemd-networkd[847]: eth0: Gained carrier
Oct 13 05:55:18.990814 systemd-networkd[847]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 13 05:55:19.003070 systemd-networkd[847]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 13 05:55:19.090944 ignition[759]: parsing config with SHA512: 7b5837dc25c7714aab62710593d86b82bf30d2609857da7653acf78269aa0adbc2da088ba24399b4ff7984338b549670f39ae95fe0cecb3fcb79599ceba64fef
Oct 13 05:55:19.099343 unknown[759]: fetched base config from "system"
Oct 13 05:55:19.099364 unknown[759]: fetched user config from "qemu"
Oct 13 05:55:19.100526 ignition[759]: fetch-offline: fetch-offline passed
Oct 13 05:55:19.101282 systemd-resolved[256]: Detected conflict on linux IN A 10.0.0.145
Oct 13 05:55:19.100643 ignition[759]: Ignition finished successfully
Oct 13 05:55:19.101299 systemd-resolved[256]: Hostname conflict, changing published hostname from 'linux' to 'linux8'.
Oct 13 05:55:19.111020 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 13 05:55:19.114894 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 13 05:55:19.115925 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 13 05:55:19.230108 ignition[860]: Ignition 2.22.0
Oct 13 05:55:19.230125 ignition[860]: Stage: kargs
Oct 13 05:55:19.230314 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:55:19.230328 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:55:19.231234 ignition[860]: kargs: kargs passed
Oct 13 05:55:19.231287 ignition[860]: Ignition finished successfully
Oct 13 05:55:19.237266 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 13 05:55:19.239032 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 13 05:55:19.274826 ignition[869]: Ignition 2.22.0
Oct 13 05:55:19.274841 ignition[869]: Stage: disks
Oct 13 05:55:19.274999 ignition[869]: no configs at "/usr/lib/ignition/base.d"
Oct 13 05:55:19.275024 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:55:19.275798 ignition[869]: disks: disks passed
Oct 13 05:55:19.275844 ignition[869]: Ignition finished successfully
Oct 13 05:55:19.287695 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 13 05:55:19.291883 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 13 05:55:19.292619 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 13 05:55:19.299875 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 13 05:55:19.300612 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 13 05:55:19.301416 systemd[1]: Reached target basic.target - Basic System.
Oct 13 05:55:19.310959 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 13 05:55:19.336911 systemd-fsck[879]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Oct 13 05:55:19.345255 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 13 05:55:19.351220 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 13 05:55:19.474049 kernel: EXT4-fs (vda9): mounted filesystem 8b520359-9763-45f3-b7f7-db1e9fbc640d r/w with ordered data mode. Quota mode: none.
Oct 13 05:55:19.475415 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 13 05:55:19.478297 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 13 05:55:19.483048 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 05:55:19.484781 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 13 05:55:19.486592 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 13 05:55:19.486633 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 13 05:55:19.486658 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 13 05:55:19.504328 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 13 05:55:19.508653 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 13 05:55:19.513911 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (887)
Oct 13 05:55:19.517305 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:55:19.517331 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 13 05:55:19.521252 kernel: BTRFS info (device vda6): turning on async discard
Oct 13 05:55:19.521307 kernel: BTRFS info (device vda6): enabling free space tree
Oct 13 05:55:19.524021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 13 05:55:19.558754 initrd-setup-root[912]: cut: /sysroot/etc/passwd: No such file or directory
Oct 13 05:55:19.567944 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory
Oct 13 05:55:19.572123 initrd-setup-root[926]: cut: /sysroot/etc/shadow: No such file or directory
Oct 13 05:55:19.577480 initrd-setup-root[933]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 13 05:55:19.684231 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 13 05:55:19.687156 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 13 05:55:19.690696 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 13 05:55:19.709522 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 13 05:55:19.712474 kernel: BTRFS info (device vda6): last unmount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd
Oct 13 05:55:19.731221 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 13 05:55:19.749060 ignition[1002]: INFO : Ignition 2.22.0
Oct 13 05:55:19.749060 ignition[1002]: INFO : Stage: mount
Oct 13 05:55:19.751854 ignition[1002]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 13 05:55:19.751854 ignition[1002]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 13 05:55:19.751854 ignition[1002]: INFO : mount: mount passed
Oct 13 05:55:19.751854 ignition[1002]: INFO : Ignition finished successfully
Oct 13 05:55:19.760942 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 13 05:55:19.764640 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 13 05:55:19.791116 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 13 05:55:19.828041 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1014) Oct 13 05:55:19.831684 kernel: BTRFS info (device vda6): first mount of filesystem 1cd10441-4b32-40b7-b370-b928e4bc90dd Oct 13 05:55:19.831712 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 13 05:55:19.836024 kernel: BTRFS info (device vda6): turning on async discard Oct 13 05:55:19.836088 kernel: BTRFS info (device vda6): enabling free space tree Oct 13 05:55:19.838115 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 13 05:55:19.895509 ignition[1031]: INFO : Ignition 2.22.0 Oct 13 05:55:19.895509 ignition[1031]: INFO : Stage: files Oct 13 05:55:19.898590 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:55:19.898590 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:55:19.898590 ignition[1031]: DEBUG : files: compiled without relabeling support, skipping Oct 13 05:55:19.898590 ignition[1031]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 13 05:55:19.898590 ignition[1031]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 13 05:55:19.909363 ignition[1031]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 13 05:55:19.909363 ignition[1031]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 13 05:55:19.909363 ignition[1031]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 13 05:55:19.909363 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:55:19.909363 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 13 05:55:19.904524 unknown[1031]: wrote ssh authorized keys file for user: core Oct 13 05:55:19.948109 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 13 05:55:20.002119 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 13 05:55:20.002119 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 13 05:55:20.024963 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 13 05:55:20.024963 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:55:20.024963 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 13 05:55:20.024963 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:55:20.024963 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 13 05:55:20.024963 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:55:20.024963 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 13 05:55:20.063996 ignition[1031]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:55:20.067314 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 13 05:55:20.067314 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:55:20.187195 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:55:20.191382 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:55:20.191382 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 13 05:55:20.598653 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 13 05:55:20.687199 systemd-networkd[847]: eth0: Gained IPv6LL Oct 13 05:55:21.046674 ignition[1031]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 13 05:55:21.046674 ignition[1031]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 13 05:55:21.054191 ignition[1031]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:55:21.054191 ignition[1031]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 13 05:55:21.054191 ignition[1031]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 13 05:55:21.054191 ignition[1031]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 13 05:55:21.054191 ignition[1031]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:55:21.054191 ignition[1031]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 13 05:55:21.054191 ignition[1031]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 13 05:55:21.054191 ignition[1031]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Oct 13 05:55:21.080662 ignition[1031]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:55:21.080662 ignition[1031]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 13 05:55:21.080662 ignition[1031]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Oct 13 05:55:21.080662 ignition[1031]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Oct 13 05:55:21.080662 ignition[1031]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Oct 13 05:55:21.080662 ignition[1031]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:55:21.080662 ignition[1031]: INFO 
: files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 13 05:55:21.080662 ignition[1031]: INFO : files: files passed Oct 13 05:55:21.080662 ignition[1031]: INFO : Ignition finished successfully Oct 13 05:55:21.083953 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 13 05:55:21.090159 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 13 05:55:21.093961 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 13 05:55:21.125688 initrd-setup-root-after-ignition[1059]: grep: /sysroot/oem/oem-release: No such file or directory Oct 13 05:55:21.105634 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 13 05:55:21.130185 initrd-setup-root-after-ignition[1061]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:55:21.130185 initrd-setup-root-after-ignition[1061]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:55:21.105831 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 13 05:55:21.137191 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 13 05:55:21.114208 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:55:21.116722 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 13 05:55:21.119745 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 13 05:55:21.190610 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 13 05:55:21.190747 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 13 05:55:21.193190 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 13 05:55:21.197723 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 13 05:55:21.200909 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 13 05:55:21.204843 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 13 05:55:21.233350 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:55:21.236843 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 13 05:55:21.266718 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:55:21.267580 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:55:21.271319 systemd[1]: Stopped target timers.target - Timer Units. Oct 13 05:55:21.271959 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 13 05:55:21.272123 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 13 05:55:21.281699 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 13 05:55:21.282901 systemd[1]: Stopped target basic.target - Basic System. Oct 13 05:55:21.287195 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 13 05:55:21.287731 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 13 05:55:21.293458 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 13 05:55:21.296752 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
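Note: the files stage above wrote a Helm tarball fetched over HTTPS, SSH keys for the "core" user, a symlink under /etc/extensions pointing at the Kubernetes sysext image, and unit presets for prepare-helm.service and coreos-metadata.service. A hedged Python sketch of the general shape of an Ignition v3-style config that produces operations like these; this is illustrative only, not the actual config this machine received, and the key/unit contents are placeholders:

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "passwd": {
            "users": [{"name": "core",
                       "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"]}]
        },
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [{"name": "prepare-helm.service",
                       "enabled": True,
                       "contents": "[Unit]\nDescription=...\n"}]
        },
    }
    print(json.dumps(config, indent=2))
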
Oct 13 05:55:21.300471 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 13 05:55:21.303617 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 13 05:55:21.306808 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 13 05:55:21.310627 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 13 05:55:21.313771 systemd[1]: Stopped target swap.target - Swaps. Oct 13 05:55:21.316886 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 13 05:55:21.317056 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 13 05:55:21.322042 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:55:21.322886 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:55:21.327463 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 13 05:55:21.331123 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:55:21.332413 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 13 05:55:21.332533 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 13 05:55:21.341056 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 13 05:55:21.341195 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 13 05:55:21.342060 systemd[1]: Stopped target paths.target - Path Units. Oct 13 05:55:21.347095 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 13 05:55:21.353118 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:55:21.353916 systemd[1]: Stopped target slices.target - Slice Units. Oct 13 05:55:21.358851 systemd[1]: Stopped target sockets.target - Socket Units. Oct 13 05:55:21.359629 systemd[1]: iscsid.socket: Deactivated successfully. Oct 13 05:55:21.359730 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 13 05:55:21.362459 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 13 05:55:21.362544 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 13 05:55:21.363030 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 13 05:55:21.363152 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 13 05:55:21.368095 systemd[1]: ignition-files.service: Deactivated successfully. Oct 13 05:55:21.368201 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 13 05:55:21.375434 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 13 05:55:21.377288 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 13 05:55:21.382709 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 13 05:55:21.382949 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:55:21.389518 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 13 05:55:21.389663 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 13 05:55:21.398114 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 13 05:55:21.398260 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Oct 13 05:55:21.416680 ignition[1086]: INFO : Ignition 2.22.0 Oct 13 05:55:21.416680 ignition[1086]: INFO : Stage: umount Oct 13 05:55:21.419668 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 13 05:55:21.419668 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 13 05:55:21.419668 ignition[1086]: INFO : umount: umount passed Oct 13 05:55:21.419668 ignition[1086]: INFO : Ignition finished successfully Oct 13 05:55:21.424867 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 13 05:55:21.425060 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 13 05:55:21.427062 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 13 05:55:21.428828 systemd[1]: Stopped target network.target - Network. Oct 13 05:55:21.429677 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 13 05:55:21.429760 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 13 05:55:21.434529 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 13 05:55:21.434621 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 13 05:55:21.438756 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 13 05:55:21.438851 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 13 05:55:21.443296 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 13 05:55:21.443397 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 13 05:55:21.444157 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 13 05:55:21.451080 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 13 05:55:21.467521 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 13 05:55:21.467674 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 13 05:55:21.473824 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Oct 13 05:55:21.474072 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 13 05:55:21.474202 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 13 05:55:21.479835 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 13 05:55:21.480713 systemd[1]: Stopped target network-pre.target - Preparation for Network. Oct 13 05:55:21.481794 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 13 05:55:21.481840 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:55:21.487908 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 13 05:55:21.491413 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 13 05:55:21.491494 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 13 05:55:21.494540 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 13 05:55:21.494600 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:55:21.501127 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 13 05:55:21.501182 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 13 05:55:21.502034 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 13 05:55:21.502094 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:55:21.567780 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Oct 13 05:55:21.572638 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 13 05:55:21.572719 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 13 05:55:21.584737 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 13 05:55:21.584899 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 13 05:55:21.598066 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 13 05:55:21.598238 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 13 05:55:21.603109 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 13 05:55:21.603296 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:55:21.605510 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 13 05:55:21.605567 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 13 05:55:21.609895 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 13 05:55:21.609936 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:55:21.613198 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 13 05:55:21.613258 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 13 05:55:21.619272 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 13 05:55:21.619331 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 13 05:55:21.624063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 13 05:55:21.624117 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 13 05:55:21.629316 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 13 05:55:21.629393 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 13 05:55:21.631248 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 13 05:55:21.634945 systemd[1]: systemd-network-generator.service: Deactivated successfully. Oct 13 05:55:21.635019 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:55:21.642126 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 13 05:55:21.642203 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:55:21.647704 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Oct 13 05:55:21.647757 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:55:21.653906 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 13 05:55:21.653955 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:55:21.654825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 13 05:55:21.654868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:55:21.667841 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Oct 13 05:55:21.667911 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Oct 13 05:55:21.667957 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 13 05:55:21.668021 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Oct 13 05:55:21.668436 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 13 05:55:21.668556 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 13 05:55:21.671339 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 13 05:55:21.682185 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 13 05:55:21.711898 systemd[1]: Switching root. Oct 13 05:55:21.757922 systemd-journald[219]: Journal stopped Oct 13 05:55:23.166640 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). Oct 13 05:55:23.166708 kernel: SELinux: policy capability network_peer_controls=1 Oct 13 05:55:23.166730 kernel: SELinux: policy capability open_perms=1 Oct 13 05:55:23.166741 kernel: SELinux: policy capability extended_socket_class=1 Oct 13 05:55:23.166759 kernel: SELinux: policy capability always_check_network=0 Oct 13 05:55:23.166775 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 13 05:55:23.166786 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 13 05:55:23.166803 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 13 05:55:23.166814 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 13 05:55:23.166825 kernel: SELinux: policy capability userspace_initial_context=0 Oct 13 05:55:23.166839 kernel: audit: type=1403 audit(1760334922.178:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 13 05:55:23.166852 systemd[1]: Successfully loaded SELinux policy in 70.056ms. Oct 13 05:55:23.166872 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.469ms. Oct 13 05:55:23.166885 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 13 05:55:23.166898 systemd[1]: Detected virtualization kvm. Oct 13 05:55:23.166911 systemd[1]: Detected architecture x86-64. Oct 13 05:55:23.166922 systemd[1]: Detected first boot. Oct 13 05:55:23.166934 systemd[1]: Initializing machine ID from VM UUID. Oct 13 05:55:23.166946 zram_generator::config[1132]: No configuration found. Oct 13 05:55:23.166961 kernel: Guest personality initialized and is inactive Oct 13 05:55:23.166973 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Oct 13 05:55:23.166984 kernel: Initialized host personality Oct 13 05:55:23.166995 kernel: NET: Registered PF_VSOCK protocol family Oct 13 05:55:23.167021 systemd[1]: Populated /etc with preset unit settings. Oct 13 05:55:23.167034 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 13 05:55:23.167046 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 13 05:55:23.167061 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 13 05:55:23.167073 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 13 05:55:23.167088 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 13 05:55:23.167101 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 13 05:55:23.167113 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 13 05:55:23.167125 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
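Note: the "systemd 256.8 running in system mode (+PAM +AUDIT ...)" entry above encodes compile-time features as +NAME/-NAME tokens. A small sketch that splits the string copied from that entry into enabled and disabled feature sets:

    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
                "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
                "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

    enabled  = sorted(t[1:] for t in features.split() if t.startswith("+"))
    disabled = sorted(t[1:] for t in features.split() if t.startswith("-"))
    print("enabled: ", ", ".join(enabled))
    print("disabled:", ", ".join(disabled))
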
Oct 13 05:55:23.167137 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 13 05:55:23.167149 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 13 05:55:23.167162 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 13 05:55:23.167174 systemd[1]: Created slice user.slice - User and Session Slice. Oct 13 05:55:23.167188 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 13 05:55:23.167201 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 13 05:55:23.167213 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 13 05:55:23.167225 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 13 05:55:23.167238 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 13 05:55:23.167250 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 13 05:55:23.167263 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 13 05:55:23.167275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 13 05:55:23.167289 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 13 05:55:23.167302 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 13 05:55:23.167314 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 13 05:55:23.167328 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 13 05:55:23.167347 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 13 05:55:23.167361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 13 05:55:23.167372 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 13 05:55:23.167384 systemd[1]: Reached target slices.target - Slice Units. Oct 13 05:55:23.167396 systemd[1]: Reached target swap.target - Swaps. Oct 13 05:55:23.167410 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 13 05:55:23.167422 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 13 05:55:23.167435 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 13 05:55:23.167447 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 13 05:55:23.167458 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 13 05:55:23.167470 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 13 05:55:23.167482 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 13 05:55:23.167494 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 13 05:55:23.167507 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 13 05:55:23.167521 systemd[1]: Mounting media.mount - External Media Directory... Oct 13 05:55:23.167533 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:55:23.167545 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 13 05:55:23.167557 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Oct 13 05:55:23.167568 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 13 05:55:23.167581 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 13 05:55:23.167594 systemd[1]: Reached target machines.target - Containers. Oct 13 05:55:23.167607 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 13 05:55:23.167621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:55:23.167634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 13 05:55:23.167646 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 13 05:55:23.167658 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:55:23.167671 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:55:23.167683 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:55:23.167695 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 13 05:55:23.167706 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:55:23.167719 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 13 05:55:23.167733 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 13 05:55:23.167745 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 13 05:55:23.167757 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 13 05:55:23.167769 systemd[1]: Stopped systemd-fsck-usr.service. Oct 13 05:55:23.167781 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:55:23.167794 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 13 05:55:23.167806 kernel: loop: module loaded Oct 13 05:55:23.167817 kernel: fuse: init (API version 7.41) Oct 13 05:55:23.167831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 13 05:55:23.167843 kernel: ACPI: bus type drm_connector registered Oct 13 05:55:23.167857 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 13 05:55:23.167875 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 13 05:55:23.167887 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 13 05:55:23.167899 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 13 05:55:23.167913 systemd[1]: verity-setup.service: Deactivated successfully. Oct 13 05:55:23.167925 systemd[1]: Stopped verity-setup.service. Oct 13 05:55:23.167938 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:55:23.167971 systemd-journald[1207]: Collecting audit messages is disabled. Oct 13 05:55:23.168000 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Oct 13 05:55:23.168027 systemd-journald[1207]: Journal started Oct 13 05:55:23.168051 systemd-journald[1207]: Runtime Journal (/run/log/journal/96c1af87ddd14eb38b0f8af437088dc5) is 6M, max 48.6M, 42.5M free. Oct 13 05:55:22.747135 systemd[1]: Queued start job for default target multi-user.target. Oct 13 05:55:22.767382 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 13 05:55:22.767894 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 13 05:55:23.170259 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 13 05:55:23.173177 systemd[1]: Started systemd-journald.service - Journal Service. Oct 13 05:55:23.176002 systemd[1]: Mounted media.mount - External Media Directory. Oct 13 05:55:23.177689 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 13 05:55:23.179499 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 13 05:55:23.181362 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 13 05:55:23.183406 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 13 05:55:23.185591 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 13 05:55:23.187894 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 13 05:55:23.188136 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 13 05:55:23.190677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:55:23.190900 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:55:23.193365 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:55:23.193578 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:55:23.195679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:55:23.195895 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:55:23.198481 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 13 05:55:23.198699 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 13 05:55:23.200831 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:55:23.201058 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:55:23.203207 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 13 05:55:23.205505 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 13 05:55:23.207973 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 13 05:55:23.210457 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 13 05:55:23.227709 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 13 05:55:23.231263 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 13 05:55:23.236179 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 13 05:55:23.238250 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 13 05:55:23.238298 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 13 05:55:23.241356 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 13 05:55:23.248135 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Oct 13 05:55:23.250287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:55:23.251598 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 13 05:55:23.255169 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 13 05:55:23.258103 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:55:23.259578 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 13 05:55:23.263687 systemd-journald[1207]: Time spent on flushing to /var/log/journal/96c1af87ddd14eb38b0f8af437088dc5 is 27.959ms for 984 entries. Oct 13 05:55:23.263687 systemd-journald[1207]: System Journal (/var/log/journal/96c1af87ddd14eb38b0f8af437088dc5) is 8M, max 195.6M, 187.6M free. Oct 13 05:55:23.363090 systemd-journald[1207]: Received client request to flush runtime journal. Oct 13 05:55:23.363157 kernel: loop0: detected capacity change from 0 to 128016 Oct 13 05:55:23.263132 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:55:23.265207 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 13 05:55:23.281189 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 13 05:55:23.313821 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 13 05:55:23.318183 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 13 05:55:23.320539 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 13 05:55:23.323742 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 13 05:55:23.324732 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 13 05:55:23.328056 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 13 05:55:23.337355 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 13 05:55:23.346599 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 13 05:55:23.367672 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 13 05:55:23.380054 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 13 05:55:23.380281 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Oct 13 05:55:23.380297 systemd-tmpfiles[1253]: ACLs are not supported, ignoring. Oct 13 05:55:23.390175 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 13 05:55:23.393304 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 13 05:55:23.401439 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 13 05:55:23.403829 kernel: loop1: detected capacity change from 0 to 110984 Oct 13 05:55:23.436151 kernel: loop2: detected capacity change from 0 to 219144 Oct 13 05:55:23.451621 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 13 05:55:23.457194 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
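Note: journald reports above that flushing 984 entries to the persistent journal took 27.959 ms, and that the system journal sits at 8M of a 195.6M cap. Simple arithmetic on those reported figures:

    flush_ms, entries = 27.959, 984
    print(f"~{flush_ms / entries * 1000:.1f} us per entry")          # ~28.4 us

    used_mib, max_mib = 8, 195.6
    print(f"system journal at {used_mib / max_mib:.1%} of its cap")  # ~4.1%
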
Oct 13 05:55:23.468195 kernel: loop3: detected capacity change from 0 to 128016 Oct 13 05:55:23.482033 kernel: loop4: detected capacity change from 0 to 110984 Oct 13 05:55:23.492449 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Oct 13 05:55:23.492469 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Oct 13 05:55:23.498087 kernel: loop5: detected capacity change from 0 to 219144 Oct 13 05:55:23.535664 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 13 05:55:23.539876 (sd-merge)[1275]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 13 05:55:23.540468 (sd-merge)[1275]: Merged extensions into '/usr'. Oct 13 05:55:23.545609 systemd[1]: Reload requested from client PID 1251 ('systemd-sysext') (unit systemd-sysext.service)... Oct 13 05:55:23.545626 systemd[1]: Reloading... Oct 13 05:55:23.646037 zram_generator::config[1303]: No configuration found. Oct 13 05:55:23.870076 ldconfig[1246]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 13 05:55:23.913947 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 13 05:55:23.914284 systemd[1]: Reloading finished in 368 ms. Oct 13 05:55:23.974251 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 13 05:55:24.011206 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 13 05:55:24.026718 systemd[1]: Starting ensure-sysext.service... Oct 13 05:55:24.029450 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 13 05:55:24.045058 systemd[1]: Reload requested from client PID 1340 ('systemctl') (unit ensure-sysext.service)... Oct 13 05:55:24.045080 systemd[1]: Reloading... Oct 13 05:55:24.053350 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 13 05:55:24.053386 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Oct 13 05:55:24.053670 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 13 05:55:24.054085 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 13 05:55:24.055016 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 13 05:55:24.055282 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Oct 13 05:55:24.055363 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Oct 13 05:55:24.060413 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:55:24.060426 systemd-tmpfiles[1342]: Skipping /boot Oct 13 05:55:24.076505 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Oct 13 05:55:24.076520 systemd-tmpfiles[1342]: Skipping /boot Oct 13 05:55:24.128035 zram_generator::config[1369]: No configuration found. Oct 13 05:55:24.346772 systemd[1]: Reloading finished in 301 ms. Oct 13 05:55:24.367453 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 13 05:55:24.377122 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 13 05:55:24.391414 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
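Note: the (sd-merge) entries above show systemd-sysext finding the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images and merging them into /usr. A sketch for listing candidate extension images by hand; the search directories are the sysext locations as documented to the best of my knowledge, so adjust them if your systemd version differs:

    import os

    SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extensions():
        """Collect *.raw images and extension directories from the usual sysext paths."""
        found = []
        for d in SYSEXT_DIRS:
            if not os.path.isdir(d):
                continue
            for name in sorted(os.listdir(d)):
                full = os.path.join(d, name)
                if name.endswith(".raw") or os.path.isdir(full):
                    found.append(full)
        return found

    for path in list_extensions():
        target = os.path.realpath(path) if os.path.islink(path) else path
        print(path, "->", target)
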
Oct 13 05:55:24.394614 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 13 05:55:24.409846 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 13 05:55:24.415130 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 13 05:55:24.420400 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 13 05:55:24.427191 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 13 05:55:24.442120 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 13 05:55:24.444524 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 13 05:55:24.452341 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:55:24.452524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:55:24.455117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:55:24.459569 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:55:24.465210 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:55:24.467176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:55:24.467288 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:55:24.470812 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 13 05:55:24.472624 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:55:24.474077 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:55:24.474317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:55:24.477295 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:55:24.477541 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:55:24.480464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:55:24.480687 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:55:24.491120 systemd-udevd[1413]: Using default interface naming scheme 'v255'. Oct 13 05:55:24.492169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:55:24.492397 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:55:24.496439 augenrules[1441]: No rules Oct 13 05:55:24.497507 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 13 05:55:24.500416 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:55:24.500698 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:55:24.507409 systemd[1]: Finished ensure-sysext.service. Oct 13 05:55:24.509296 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Oct 13 05:55:24.512192 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 13 05:55:24.514242 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 13 05:55:24.520917 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:55:24.521170 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 13 05:55:24.527195 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 13 05:55:24.531193 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 13 05:55:24.537131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 13 05:55:24.540193 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 13 05:55:24.542035 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 13 05:55:24.542080 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 13 05:55:24.553252 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 13 05:55:24.555207 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 13 05:55:24.555239 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 13 05:55:24.569742 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 13 05:55:24.572347 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 13 05:55:24.572611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 13 05:55:24.574782 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 13 05:55:24.575050 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 13 05:55:24.577051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 13 05:55:24.577273 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 13 05:55:24.579507 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 13 05:55:24.579715 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 13 05:55:24.597167 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 13 05:55:24.599137 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 13 05:55:24.599230 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 13 05:55:24.608106 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 13 05:55:24.664460 systemd-resolved[1411]: Positive Trust Anchors: Oct 13 05:55:24.664480 systemd-resolved[1411]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 13 05:55:24.664511 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 13 05:55:24.670256 systemd-resolved[1411]: Defaulting to hostname 'linux'. Oct 13 05:55:24.674993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 13 05:55:24.677541 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 13 05:55:24.679723 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 13 05:55:24.683636 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 13 05:55:24.690050 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 13 05:55:24.700037 kernel: mousedev: PS/2 mouse device common for all mice Oct 13 05:55:24.700129 kernel: ACPI: button: Power Button [PWRF] Oct 13 05:55:24.711587 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 13 05:55:24.722903 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 13 05:55:24.723227 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 13 05:55:24.786874 systemd-networkd[1493]: lo: Link UP Oct 13 05:55:24.786893 systemd-networkd[1493]: lo: Gained carrier Oct 13 05:55:24.789687 systemd-networkd[1493]: Enumeration completed Oct 13 05:55:24.789807 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 13 05:55:24.790961 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:55:24.790967 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 13 05:55:24.791912 systemd[1]: Reached target network.target - Network. Oct 13 05:55:24.793102 systemd-networkd[1493]: eth0: Link UP Oct 13 05:55:24.793250 systemd-networkd[1493]: eth0: Gained carrier Oct 13 05:55:24.793268 systemd-networkd[1493]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 13 05:55:24.795798 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 13 05:55:24.806228 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 13 05:55:24.807058 systemd-networkd[1493]: eth0: DHCPv4 address 10.0.0.145/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 13 05:55:24.813319 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 13 05:55:24.815469 systemd[1]: Reached target sysinit.target - System Initialization. Oct 13 05:55:25.365825 systemd-timesyncd[1486]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 13 05:55:25.365889 systemd-timesyncd[1486]: Initial clock synchronization to Mon 2025-10-13 05:55:25.365713 UTC. Oct 13 05:55:25.366468 systemd-resolved[1411]: Clock change detected. 
Flushing caches. Oct 13 05:55:25.367237 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 13 05:55:25.370865 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 13 05:55:25.372956 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 13 05:55:25.375126 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 13 05:55:25.377505 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 13 05:55:25.377544 systemd[1]: Reached target paths.target - Path Units. Oct 13 05:55:25.379063 systemd[1]: Reached target time-set.target - System Time Set. Oct 13 05:55:25.380909 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 13 05:55:25.382793 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 13 05:55:25.384822 systemd[1]: Reached target timers.target - Timer Units. Oct 13 05:55:25.388616 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 13 05:55:25.392116 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 13 05:55:25.396718 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 13 05:55:25.398947 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 13 05:55:25.401117 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 13 05:55:25.444014 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 13 05:55:25.446843 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 13 05:55:25.452144 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 13 05:55:25.456057 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 13 05:55:25.475380 systemd[1]: Reached target sockets.target - Socket Units. Oct 13 05:55:25.477895 systemd[1]: Reached target basic.target - Basic System. Oct 13 05:55:25.479613 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:55:25.479718 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 13 05:55:25.481265 systemd[1]: Starting containerd.service - containerd container runtime... Oct 13 05:55:25.486922 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 13 05:55:25.493331 kernel: kvm_amd: TSC scaling supported Oct 13 05:55:25.493436 kernel: kvm_amd: Nested Virtualization enabled Oct 13 05:55:25.493451 kernel: kvm_amd: Nested Paging enabled Oct 13 05:55:25.493463 kernel: kvm_amd: LBR virtualization supported Oct 13 05:55:25.493478 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 13 05:55:25.493491 kernel: kvm_amd: Virtual GIF supported Oct 13 05:55:25.506261 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 13 05:55:25.510487 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 13 05:55:25.514770 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
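Note: a few entries back systemd-resolved logged its positive trust anchor as the root DS record ". IN DS 20326 8 2 e06d44...". The four fields after "DS" are key tag, algorithm, digest type and digest; a tiny sketch splitting them (the algorithm and digest-type names in the comments are my annotation from the DNSSEC registries, not part of the log):

    record = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, _cls, _rtype, key_tag, alg, digest_type, digest = record.split()
    print("owner:", owner)
    print("key tag:", key_tag)                           # 20326
    print("algorithm:", alg)                             # 8 = RSA/SHA-256
    print("digest type:", digest_type)                   # 2 = SHA-256
    print("digest length:", len(digest) // 2, "bytes")   # 32 bytes for SHA-256
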
Oct 13 05:55:25.516642 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 13 05:55:25.520073 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 13 05:55:25.521111 jq[1532]: false Oct 13 05:55:25.523598 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 13 05:55:25.528923 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 13 05:55:25.532515 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 13 05:55:25.535172 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 13 05:55:25.538100 extend-filesystems[1533]: Found /dev/vda6 Oct 13 05:55:25.543075 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 13 05:55:25.545888 extend-filesystems[1533]: Found /dev/vda9 Oct 13 05:55:25.548796 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing passwd entry cache Oct 13 05:55:25.547469 oslogin_cache_refresh[1534]: Refreshing passwd entry cache Oct 13 05:55:25.549283 extend-filesystems[1533]: Checking size of /dev/vda9 Oct 13 05:55:25.553050 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 13 05:55:25.555930 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 13 05:55:25.556638 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 13 05:55:25.558951 systemd[1]: Starting update-engine.service - Update Engine... Oct 13 05:55:25.560948 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting users, quitting Oct 13 05:55:25.561024 oslogin_cache_refresh[1534]: Failure getting users, quitting Oct 13 05:55:25.561116 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:55:25.561154 oslogin_cache_refresh[1534]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 13 05:55:25.561289 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Refreshing group entry cache Oct 13 05:55:25.561935 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 13 05:55:25.562144 oslogin_cache_refresh[1534]: Refreshing group entry cache Oct 13 05:55:25.562722 kernel: EDAC MC: Ver: 3.0.0 Oct 13 05:55:25.570281 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Failure getting groups, quitting Oct 13 05:55:25.570383 oslogin_cache_refresh[1534]: Failure getting groups, quitting Oct 13 05:55:25.570483 google_oslogin_nss_cache[1534]: oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:55:25.570528 oslogin_cache_refresh[1534]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 13 05:55:25.578136 extend-filesystems[1533]: Resized partition /dev/vda9 Oct 13 05:55:25.580298 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 13 05:55:25.580718 jq[1550]: true Oct 13 05:55:25.583002 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 13 05:55:25.584983 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Oct 13 05:55:25.585538 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 13 05:55:25.585867 extend-filesystems[1561]: resize2fs 1.47.3 (8-Jul-2025) Oct 13 05:55:25.588202 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 13 05:55:25.594719 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 13 05:55:25.600938 systemd[1]: motdgen.service: Deactivated successfully. Oct 13 05:55:25.602447 update_engine[1549]: I20251013 05:55:25.595750 1549 main.cc:92] Flatcar Update Engine starting Oct 13 05:55:25.601284 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 13 05:55:25.604564 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 13 05:55:25.605031 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 13 05:55:25.638708 tar[1564]: linux-amd64/LICENSE Oct 13 05:55:25.640026 (ntainerd)[1568]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 13 05:55:25.640815 tar[1564]: linux-amd64/helm Oct 13 05:55:25.643698 jq[1567]: true Oct 13 05:55:25.651786 systemd-logind[1543]: Watching system buttons on /dev/input/event2 (Power Button) Oct 13 05:55:25.675180 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 13 05:55:25.652085 systemd-logind[1543]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 13 05:55:25.653791 systemd-logind[1543]: New seat seat0. Oct 13 05:55:25.657329 systemd[1]: Started systemd-logind.service - User Login Management. Oct 13 05:55:25.676441 extend-filesystems[1561]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 13 05:55:25.676441 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 13 05:55:25.676441 extend-filesystems[1561]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 13 05:55:25.679532 extend-filesystems[1533]: Resized filesystem in /dev/vda9 Oct 13 05:55:25.680420 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 13 05:55:25.680724 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 13 05:55:25.683535 dbus-daemon[1530]: [system] SELinux support is enabled Oct 13 05:55:25.689222 update_engine[1549]: I20251013 05:55:25.688841 1549 update_check_scheduler.cc:74] Next update check in 10m39s Oct 13 05:55:25.722635 bash[1600]: Updated "/home/core/.ssh/authorized_keys" Oct 13 05:55:25.860405 containerd[1568]: time="2025-10-13T05:55:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 13 05:55:25.861795 containerd[1568]: time="2025-10-13T05:55:25.861632202Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 13 05:55:25.862482 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 13 05:55:25.867378 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 13 05:55:25.870541 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 13 05:55:25.875434 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
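For scale, the online resize above grows the root filesystem from 553472 to 1864699 blocks of 4 KiB each. A quick, illustrative conversion of those block counts (numbers taken directly from the EXT4/resize2fs messages):

    # Convert the block counts reported by EXT4-fs and resize2fs into sizes.
    BLOCK_SIZE = 4096                       # "(4k) blocks" per the log
    old_blocks, new_blocks = 553_472, 1_864_699
    old_bytes = old_blocks * BLOCK_SIZE     # ~2.1 GiB before the resize
    new_bytes = new_blocks * BLOCK_SIZE     # ~7.1 GiB after the resize
    print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
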
Oct 13 05:55:25.876471 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 13 05:55:25.876710 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 13 05:55:25.879414 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 13 05:55:25.879634 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 13 05:55:25.879461 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 13 05:55:25.882333 systemd[1]: Started update-engine.service - Update Engine. Oct 13 05:55:25.882897 containerd[1568]: time="2025-10-13T05:55:25.882851157Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.582µs" Oct 13 05:55:25.882993 containerd[1568]: time="2025-10-13T05:55:25.882962336Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 13 05:55:25.883061 containerd[1568]: time="2025-10-13T05:55:25.883047295Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 13 05:55:25.883344 containerd[1568]: time="2025-10-13T05:55:25.883326899Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 13 05:55:25.883411 containerd[1568]: time="2025-10-13T05:55:25.883397422Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 13 05:55:25.883505 containerd[1568]: time="2025-10-13T05:55:25.883490737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:55:25.883636 containerd[1568]: time="2025-10-13T05:55:25.883616152Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 13 05:55:25.883714 containerd[1568]: time="2025-10-13T05:55:25.883683809Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:55:25.884126 containerd[1568]: time="2025-10-13T05:55:25.884105780Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 13 05:55:25.884182 containerd[1568]: time="2025-10-13T05:55:25.884170712Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:55:25.884229 containerd[1568]: time="2025-10-13T05:55:25.884217670Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 13 05:55:25.884288 containerd[1568]: time="2025-10-13T05:55:25.884276159Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 13 05:55:25.884437 containerd[1568]: time="2025-10-13T05:55:25.884422283Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 
Oct 13 05:55:25.884757 containerd[1568]: time="2025-10-13T05:55:25.884738677Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:55:25.884842 containerd[1568]: time="2025-10-13T05:55:25.884828385Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 13 05:55:25.884907 containerd[1568]: time="2025-10-13T05:55:25.884877617Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 13 05:55:25.884963 containerd[1568]: time="2025-10-13T05:55:25.884951095Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 13 05:55:25.885387 containerd[1568]: time="2025-10-13T05:55:25.885295150Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 13 05:55:25.885387 containerd[1568]: time="2025-10-13T05:55:25.885363408Z" level=info msg="metadata content store policy set" policy=shared Oct 13 05:55:25.887713 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 13 05:55:25.898529 containerd[1568]: time="2025-10-13T05:55:25.898462078Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 13 05:55:25.898803 containerd[1568]: time="2025-10-13T05:55:25.898565372Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 13 05:55:25.898803 containerd[1568]: time="2025-10-13T05:55:25.898608823Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 13 05:55:25.898803 containerd[1568]: time="2025-10-13T05:55:25.898653397Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 13 05:55:25.898803 containerd[1568]: time="2025-10-13T05:55:25.898683884Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 13 05:55:25.898803 containerd[1568]: time="2025-10-13T05:55:25.898721655Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 13 05:55:25.898803 containerd[1568]: time="2025-10-13T05:55:25.898743476Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 13 05:55:25.898803 containerd[1568]: time="2025-10-13T05:55:25.898771678Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 13 05:55:25.898803 containerd[1568]: time="2025-10-13T05:55:25.898788360Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 13 05:55:25.898803 containerd[1568]: time="2025-10-13T05:55:25.898807405Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 13 05:55:25.899076 containerd[1568]: time="2025-10-13T05:55:25.898821201Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 13 05:55:25.899076 containerd[1568]: time="2025-10-13T05:55:25.898838995Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 13 05:55:25.899130 containerd[1568]: time="2025-10-13T05:55:25.899075348Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 13 05:55:25.899130 containerd[1568]: time="2025-10-13T05:55:25.899111506Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 13 05:55:25.899179 containerd[1568]: time="2025-10-13T05:55:25.899142484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 13 05:55:25.899205 containerd[1568]: time="2025-10-13T05:55:25.899170576Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 13 05:55:25.899205 containerd[1568]: time="2025-10-13T05:55:25.899195082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 13 05:55:25.899260 containerd[1568]: time="2025-10-13T05:55:25.899218827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 13 05:55:25.899260 containerd[1568]: time="2025-10-13T05:55:25.899246769Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 13 05:55:25.899316 containerd[1568]: time="2025-10-13T05:55:25.899270704Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 13 05:55:25.899316 containerd[1568]: time="2025-10-13T05:55:25.899295280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 13 05:55:25.899372 containerd[1568]: time="2025-10-13T05:55:25.899320327Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 13 05:55:25.899372 containerd[1568]: time="2025-10-13T05:55:25.899346146Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 13 05:55:25.900117 containerd[1568]: time="2025-10-13T05:55:25.899449209Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 13 05:55:25.900117 containerd[1568]: time="2025-10-13T05:55:25.899499022Z" level=info msg="Start snapshots syncer" Oct 13 05:55:25.900117 containerd[1568]: time="2025-10-13T05:55:25.899550729Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.899910864Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900010291Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900142849Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900286379Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900312558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900325983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900341923Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900356540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900370105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900383070Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900426030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 13 05:55:25.900812 containerd[1568]: 
time="2025-10-13T05:55:25.900446469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900460996Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900511861Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900538401Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900556755Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900569419Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900584608Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900603343Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900624212Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900645772Z" level=info msg="runtime interface created" Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900652455Z" level=info msg="created NRI interface" Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900661943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900673775Z" level=info msg="Connect containerd service" Oct 13 05:55:25.900812 containerd[1568]: time="2025-10-13T05:55:25.900733747Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 13 05:55:25.902266 containerd[1568]: time="2025-10-13T05:55:25.901589863Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:55:25.993564 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 13 05:55:26.051217 locksmithd[1610]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 13 05:55:26.051828 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 13 05:55:26.058500 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 13 05:55:26.087218 systemd[1]: issuegen.service: Deactivated successfully. Oct 13 05:55:26.087591 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 13 05:55:26.093006 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 13 05:55:26.186107 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Oct 13 05:55:26.192780 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 13 05:55:26.250504 containerd[1568]: time="2025-10-13T05:55:26.249982569Z" level=info msg="Start subscribing containerd event" Oct 13 05:55:26.250504 containerd[1568]: time="2025-10-13T05:55:26.250081043Z" level=info msg="Start recovering state" Oct 13 05:55:26.250504 containerd[1568]: time="2025-10-13T05:55:26.250218010Z" level=info msg="Start event monitor" Oct 13 05:55:26.250504 containerd[1568]: time="2025-10-13T05:55:26.250235854Z" level=info msg="Start cni network conf syncer for default" Oct 13 05:55:26.250504 containerd[1568]: time="2025-10-13T05:55:26.250245993Z" level=info msg="Start streaming server" Oct 13 05:55:26.250504 containerd[1568]: time="2025-10-13T05:55:26.250269877Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 13 05:55:26.250504 containerd[1568]: time="2025-10-13T05:55:26.250279746Z" level=info msg="runtime interface starting up..." Oct 13 05:55:26.250504 containerd[1568]: time="2025-10-13T05:55:26.250288162Z" level=info msg="starting plugins..." Oct 13 05:55:26.250504 containerd[1568]: time="2025-10-13T05:55:26.250311175Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 13 05:55:26.250900 containerd[1568]: time="2025-10-13T05:55:26.250623260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 13 05:55:26.250900 containerd[1568]: time="2025-10-13T05:55:26.250759395Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 13 05:55:26.250955 containerd[1568]: time="2025-10-13T05:55:26.250925958Z" level=info msg="containerd successfully booted in 0.391409s" Oct 13 05:55:26.253199 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 13 05:55:26.255535 systemd[1]: Reached target getty.target - Login Prompts. Oct 13 05:55:26.257653 systemd[1]: Started containerd.service - containerd container runtime. Oct 13 05:55:26.261770 tar[1564]: linux-amd64/README.md Oct 13 05:55:26.299657 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 13 05:55:26.932970 systemd-networkd[1493]: eth0: Gained IPv6LL Oct 13 05:55:26.937837 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 13 05:55:26.940474 systemd[1]: Reached target network-online.target - Network is Online. Oct 13 05:55:26.943740 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 13 05:55:26.946827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:55:26.958848 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 13 05:55:26.987580 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 13 05:55:26.990188 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 13 05:55:26.990510 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 13 05:55:26.993593 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 13 05:55:28.922706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:28.925272 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 13 05:55:28.927267 systemd[1]: Startup finished in 3.293s (kernel) + 6.515s (initrd) + 6.266s (userspace) = 16.076s. 
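The boot summary above reports 3.293s (kernel) + 6.515s (initrd) + 6.266s (userspace) = 16.076s; the total is measured as its own span and each figure is rounded to milliseconds, so the printed total can differ from the sum of the printed phases by a millisecond or two. A trivial check using the values from the log:

    # Sanity-check the "Startup finished" arithmetic reported by systemd.
    kernel, initrd, userspace = 3.293, 6.515, 6.266
    total_reported = 16.076
    print(f"sum of phases = {kernel + initrd + userspace:.3f}s, "
          f"reported total = {total_reported:.3f}s")
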
Oct 13 05:55:28.927855 (kubelet)[1673]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:55:29.683756 kubelet[1673]: E1013 05:55:29.683681 1673 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:55:29.687758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:55:29.688000 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:55:29.688419 systemd[1]: kubelet.service: Consumed 2.539s CPU time, 257.3M memory peak. Oct 13 05:55:30.128891 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 13 05:55:30.130271 systemd[1]: Started sshd@0-10.0.0.145:22-10.0.0.1:38856.service - OpenSSH per-connection server daemon (10.0.0.1:38856). Oct 13 05:55:30.218557 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 38856 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:55:30.220458 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:30.227483 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 13 05:55:30.228636 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 13 05:55:30.235708 systemd-logind[1543]: New session 1 of user core. Oct 13 05:55:30.248584 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 13 05:55:30.251928 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 13 05:55:30.266132 (systemd)[1691]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 13 05:55:30.268491 systemd-logind[1543]: New session c1 of user core. Oct 13 05:55:30.419670 systemd[1691]: Queued start job for default target default.target. Oct 13 05:55:30.431317 systemd[1691]: Created slice app.slice - User Application Slice. Oct 13 05:55:30.431349 systemd[1691]: Reached target paths.target - Paths. Oct 13 05:55:30.431398 systemd[1691]: Reached target timers.target - Timers. Oct 13 05:55:30.433036 systemd[1691]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 13 05:55:30.445450 systemd[1691]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 13 05:55:30.445592 systemd[1691]: Reached target sockets.target - Sockets. Oct 13 05:55:30.445645 systemd[1691]: Reached target basic.target - Basic System. Oct 13 05:55:30.445705 systemd[1691]: Reached target default.target - Main User Target. Oct 13 05:55:30.445745 systemd[1691]: Startup finished in 170ms. Oct 13 05:55:30.446157 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 13 05:55:30.448033 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 13 05:55:30.512713 systemd[1]: Started sshd@1-10.0.0.145:22-10.0.0.1:38872.service - OpenSSH per-connection server daemon (10.0.0.1:38872). Oct 13 05:55:30.579520 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 38872 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:55:30.581447 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:30.586377 systemd-logind[1543]: New session 2 of user core. 
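The kubelet exit at the top of this stretch is the usual pre-bootstrap failure: /var/lib/kubelet/config.yaml does not exist yet (it is normally written later, for example by kubeadm), so the unit fails and systemd will keep restarting it. An illustrative check of that precondition, assuming the path from the error message:

    # Check the file the kubelet error above complains about.
    from pathlib import Path

    config = Path("/var/lib/kubelet/config.yaml")
    if config.is_file():
        print(f"{config} present ({config.stat().st_size} bytes)")
    else:
        print(f"{config} missing - kubelet will keep exiting until it is written")
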
Oct 13 05:55:30.593972 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 13 05:55:30.647329 sshd[1705]: Connection closed by 10.0.0.1 port 38872 Oct 13 05:55:30.647644 sshd-session[1702]: pam_unix(sshd:session): session closed for user core Oct 13 05:55:30.668127 systemd[1]: sshd@1-10.0.0.145:22-10.0.0.1:38872.service: Deactivated successfully. Oct 13 05:55:30.669815 systemd[1]: session-2.scope: Deactivated successfully. Oct 13 05:55:30.670584 systemd-logind[1543]: Session 2 logged out. Waiting for processes to exit. Oct 13 05:55:30.673021 systemd[1]: Started sshd@2-10.0.0.145:22-10.0.0.1:38886.service - OpenSSH per-connection server daemon (10.0.0.1:38886). Oct 13 05:55:30.673531 systemd-logind[1543]: Removed session 2. Oct 13 05:55:30.726608 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 38886 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:55:30.727867 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:30.731822 systemd-logind[1543]: New session 3 of user core. Oct 13 05:55:30.741820 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 13 05:55:30.790153 sshd[1715]: Connection closed by 10.0.0.1 port 38886 Oct 13 05:55:30.790497 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Oct 13 05:55:30.803666 systemd[1]: sshd@2-10.0.0.145:22-10.0.0.1:38886.service: Deactivated successfully. Oct 13 05:55:30.805351 systemd[1]: session-3.scope: Deactivated successfully. Oct 13 05:55:30.806049 systemd-logind[1543]: Session 3 logged out. Waiting for processes to exit. Oct 13 05:55:30.808335 systemd[1]: Started sshd@3-10.0.0.145:22-10.0.0.1:38898.service - OpenSSH per-connection server daemon (10.0.0.1:38898). Oct 13 05:55:30.808931 systemd-logind[1543]: Removed session 3. Oct 13 05:55:30.865166 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 38898 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:55:30.866492 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:30.870658 systemd-logind[1543]: New session 4 of user core. Oct 13 05:55:30.881830 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 13 05:55:30.934409 sshd[1724]: Connection closed by 10.0.0.1 port 38898 Oct 13 05:55:30.935885 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Oct 13 05:55:30.950393 systemd[1]: sshd@3-10.0.0.145:22-10.0.0.1:38898.service: Deactivated successfully. Oct 13 05:55:30.952096 systemd[1]: session-4.scope: Deactivated successfully. Oct 13 05:55:30.952905 systemd-logind[1543]: Session 4 logged out. Waiting for processes to exit. Oct 13 05:55:30.955389 systemd[1]: Started sshd@4-10.0.0.145:22-10.0.0.1:38910.service - OpenSSH per-connection server daemon (10.0.0.1:38910). Oct 13 05:55:30.956088 systemd-logind[1543]: Removed session 4. Oct 13 05:55:31.017121 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 38910 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:55:31.018403 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:31.023080 systemd-logind[1543]: New session 5 of user core. Oct 13 05:55:31.040824 systemd[1]: Started session-5.scope - Session 5 of User core. 
Oct 13 05:55:31.219436 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 13 05:55:31.219829 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:55:31.243936 sudo[1734]: pam_unix(sudo:session): session closed for user root Oct 13 05:55:31.245661 sshd[1733]: Connection closed by 10.0.0.1 port 38910 Oct 13 05:55:31.246337 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Oct 13 05:55:31.257501 systemd[1]: sshd@4-10.0.0.145:22-10.0.0.1:38910.service: Deactivated successfully. Oct 13 05:55:31.259184 systemd[1]: session-5.scope: Deactivated successfully. Oct 13 05:55:31.259910 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. Oct 13 05:55:31.262142 systemd[1]: Started sshd@5-10.0.0.145:22-10.0.0.1:38916.service - OpenSSH per-connection server daemon (10.0.0.1:38916). Oct 13 05:55:31.263214 systemd-logind[1543]: Removed session 5. Oct 13 05:55:31.325780 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 38916 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:55:31.327262 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:31.331756 systemd-logind[1543]: New session 6 of user core. Oct 13 05:55:31.341831 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 13 05:55:31.395146 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 13 05:55:31.395448 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:55:31.401595 sudo[1745]: pam_unix(sudo:session): session closed for user root Oct 13 05:55:31.407553 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 13 05:55:31.407892 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:55:31.418045 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 13 05:55:31.471051 augenrules[1767]: No rules Oct 13 05:55:31.472589 systemd[1]: audit-rules.service: Deactivated successfully. Oct 13 05:55:31.472902 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 13 05:55:31.474141 sudo[1744]: pam_unix(sudo:session): session closed for user root Oct 13 05:55:31.476878 sshd[1743]: Connection closed by 10.0.0.1 port 38916 Oct 13 05:55:31.476319 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Oct 13 05:55:31.485316 systemd[1]: sshd@5-10.0.0.145:22-10.0.0.1:38916.service: Deactivated successfully. Oct 13 05:55:31.487270 systemd[1]: session-6.scope: Deactivated successfully. Oct 13 05:55:31.488062 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Oct 13 05:55:31.490883 systemd[1]: Started sshd@6-10.0.0.145:22-10.0.0.1:38924.service - OpenSSH per-connection server daemon (10.0.0.1:38924). Oct 13 05:55:31.491618 systemd-logind[1543]: Removed session 6. Oct 13 05:55:31.559361 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 38924 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:55:31.561411 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:55:31.566079 systemd-logind[1543]: New session 7 of user core. Oct 13 05:55:31.579854 systemd[1]: Started session-7.scope - Session 7 of User core. 
Oct 13 05:55:31.633551 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 13 05:55:31.633887 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 13 05:55:32.407612 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 13 05:55:32.427304 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 13 05:55:33.211235 dockerd[1800]: time="2025-10-13T05:55:33.211125099Z" level=info msg="Starting up" Oct 13 05:55:33.212226 dockerd[1800]: time="2025-10-13T05:55:33.212177492Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 13 05:55:33.236050 dockerd[1800]: time="2025-10-13T05:55:33.235973861Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 13 05:55:33.585645 systemd[1]: var-lib-docker-metacopy\x2dcheck253875487-merged.mount: Deactivated successfully. Oct 13 05:55:33.618199 dockerd[1800]: time="2025-10-13T05:55:33.618137778Z" level=info msg="Loading containers: start." Oct 13 05:55:33.631714 kernel: Initializing XFRM netlink socket Oct 13 05:55:34.002587 systemd-networkd[1493]: docker0: Link UP Oct 13 05:55:34.009590 dockerd[1800]: time="2025-10-13T05:55:34.009544415Z" level=info msg="Loading containers: done." Oct 13 05:55:34.027720 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck382072592-merged.mount: Deactivated successfully. Oct 13 05:55:34.032020 dockerd[1800]: time="2025-10-13T05:55:34.031963771Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 13 05:55:34.032124 dockerd[1800]: time="2025-10-13T05:55:34.032082314Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 13 05:55:34.032224 dockerd[1800]: time="2025-10-13T05:55:34.032199273Z" level=info msg="Initializing buildkit" Oct 13 05:55:34.395924 dockerd[1800]: time="2025-10-13T05:55:34.395722709Z" level=info msg="Completed buildkit initialization" Oct 13 05:55:34.402266 dockerd[1800]: time="2025-10-13T05:55:34.402191268Z" level=info msg="Daemon has completed initialization" Oct 13 05:55:34.402560 dockerd[1800]: time="2025-10-13T05:55:34.402495198Z" level=info msg="API listen on /run/docker.sock" Oct 13 05:55:34.402649 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 13 05:55:35.314869 containerd[1568]: time="2025-10-13T05:55:35.314815508Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 13 05:55:35.958547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount957330148.mount: Deactivated successfully. 
Oct 13 05:55:37.442784 containerd[1568]: time="2025-10-13T05:55:37.442669624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:37.443806 containerd[1568]: time="2025-10-13T05:55:37.443742716Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 13 05:55:37.445554 containerd[1568]: time="2025-10-13T05:55:37.445501534Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:37.448227 containerd[1568]: time="2025-10-13T05:55:37.448197469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:37.451212 containerd[1568]: time="2025-10-13T05:55:37.450514153Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.135658649s" Oct 13 05:55:37.451212 containerd[1568]: time="2025-10-13T05:55:37.450549178Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 13 05:55:37.451634 containerd[1568]: time="2025-10-13T05:55:37.451598175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 13 05:55:38.886021 containerd[1568]: time="2025-10-13T05:55:38.885940186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:38.886907 containerd[1568]: time="2025-10-13T05:55:38.886841396Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 13 05:55:38.888254 containerd[1568]: time="2025-10-13T05:55:38.888209952Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:38.890991 containerd[1568]: time="2025-10-13T05:55:38.890960729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:38.892077 containerd[1568]: time="2025-10-13T05:55:38.892043289Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.440409667s" Oct 13 05:55:38.892077 containerd[1568]: time="2025-10-13T05:55:38.892073716Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 13 05:55:38.892767 
containerd[1568]: time="2025-10-13T05:55:38.892731971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 13 05:55:39.938450 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 13 05:55:39.940445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:55:40.354542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:40.372071 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:55:40.404228 containerd[1568]: time="2025-10-13T05:55:40.404145771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:40.405169 containerd[1568]: time="2025-10-13T05:55:40.405113826Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 13 05:55:40.406202 containerd[1568]: time="2025-10-13T05:55:40.406163494Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:40.408939 containerd[1568]: time="2025-10-13T05:55:40.408883955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:40.411110 containerd[1568]: time="2025-10-13T05:55:40.411061368Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.51829322s" Oct 13 05:55:40.411110 containerd[1568]: time="2025-10-13T05:55:40.411105190Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 13 05:55:40.411726 containerd[1568]: time="2025-10-13T05:55:40.411681671Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 13 05:55:40.480820 kubelet[2093]: E1013 05:55:40.480745 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:55:40.487363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:55:40.487570 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:55:40.487980 systemd[1]: kubelet.service: Consumed 503ms CPU time, 110.6M memory peak. Oct 13 05:55:41.944332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2454811106.mount: Deactivated successfully. 
Oct 13 05:55:42.730609 containerd[1568]: time="2025-10-13T05:55:42.730518048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:42.801975 containerd[1568]: time="2025-10-13T05:55:42.801875073Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 13 05:55:42.874086 containerd[1568]: time="2025-10-13T05:55:42.873985099Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:42.908935 containerd[1568]: time="2025-10-13T05:55:42.908863917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:42.909481 containerd[1568]: time="2025-10-13T05:55:42.909448743Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.497727649s" Oct 13 05:55:42.909481 containerd[1568]: time="2025-10-13T05:55:42.909479301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 13 05:55:42.910100 containerd[1568]: time="2025-10-13T05:55:42.910061312Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 13 05:55:43.583616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3902014940.mount: Deactivated successfully. 
Oct 13 05:55:45.160190 containerd[1568]: time="2025-10-13T05:55:45.160048808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:45.160929 containerd[1568]: time="2025-10-13T05:55:45.160618216Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 13 05:55:45.161981 containerd[1568]: time="2025-10-13T05:55:45.161939192Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:45.165618 containerd[1568]: time="2025-10-13T05:55:45.165547538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:45.166937 containerd[1568]: time="2025-10-13T05:55:45.166873354Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.256778198s" Oct 13 05:55:45.166937 containerd[1568]: time="2025-10-13T05:55:45.166929740Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 13 05:55:45.167900 containerd[1568]: time="2025-10-13T05:55:45.167831901Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 13 05:55:47.605439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451321802.mount: Deactivated successfully. 
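Each "Pulled image ... in Ns" entry pairs a byte count with a wall-clock duration; for the coredns pull above that works out to roughly 9-10 MB/s (approximate, since the reported time also covers unpacking). Using the figures from the log:

    # Rough pull throughput for the coredns image, values taken from the log.
    bytes_read = 22_388_007          # "bytes read" while pulling coredns
    duration_s = 2.256778198         # reported pull duration
    print(f"~{bytes_read / duration_s / 1e6:.1f} MB/s")
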
Oct 13 05:55:47.617322 containerd[1568]: time="2025-10-13T05:55:47.617253817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:47.618290 containerd[1568]: time="2025-10-13T05:55:47.618252569Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 13 05:55:47.619597 containerd[1568]: time="2025-10-13T05:55:47.619563808Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:47.622276 containerd[1568]: time="2025-10-13T05:55:47.622219457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:47.622836 containerd[1568]: time="2025-10-13T05:55:47.622802921Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 2.454889357s" Oct 13 05:55:47.622836 containerd[1568]: time="2025-10-13T05:55:47.622830383Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 13 05:55:47.623502 containerd[1568]: time="2025-10-13T05:55:47.623474751Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 13 05:55:50.738228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 13 05:55:50.740214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:55:51.025356 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:51.039047 (kubelet)[2213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 13 05:55:51.149150 kubelet[2213]: E1013 05:55:51.149075 2213 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 13 05:55:51.153553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 13 05:55:51.153776 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 13 05:55:51.154168 systemd[1]: kubelet.service: Consumed 380ms CPU time, 112.5M memory peak. 
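By this point systemd has restarted the kubelet twice ("restart counter is at 2") for the same missing-config reason. One way to watch that from outside the journal (a sketch, assuming systemctl is available) is to query the unit's restart bookkeeping:

    # Query systemd's restart bookkeeping for the kubelet unit.
    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "kubelet.service",
         "--property=NRestarts,Result,ExecMainStatus"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())
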
Oct 13 05:55:51.243357 containerd[1568]: time="2025-10-13T05:55:51.243287649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:51.244226 containerd[1568]: time="2025-10-13T05:55:51.244168030Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 13 05:55:51.245390 containerd[1568]: time="2025-10-13T05:55:51.245353943Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:51.248001 containerd[1568]: time="2025-10-13T05:55:51.247969828Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:55:51.248926 containerd[1568]: time="2025-10-13T05:55:51.248881267Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.625380157s" Oct 13 05:55:51.248926 containerd[1568]: time="2025-10-13T05:55:51.248926392Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 13 05:55:54.809111 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:54.809275 systemd[1]: kubelet.service: Consumed 380ms CPU time, 112.5M memory peak. Oct 13 05:55:54.811354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:55:54.838456 systemd[1]: Reload requested from client PID 2254 ('systemctl') (unit session-7.scope)... Oct 13 05:55:54.838473 systemd[1]: Reloading... Oct 13 05:55:54.927803 zram_generator::config[2299]: No configuration found. Oct 13 05:55:55.185014 systemd[1]: Reloading finished in 346 ms. Oct 13 05:55:55.265601 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 13 05:55:55.265757 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 13 05:55:55.266196 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:55.266257 systemd[1]: kubelet.service: Consumed 170ms CPU time, 98.1M memory peak. Oct 13 05:55:55.268174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:55:55.565975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:55:55.580057 (kubelet)[2344]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:55:55.625614 kubelet[2344]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:55:55.625614 kubelet[2344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 13 05:55:55.626053 kubelet[2344]: I1013 05:55:55.625670 2344 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:55:56.430163 kubelet[2344]: I1013 05:55:56.430081 2344 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:55:56.430163 kubelet[2344]: I1013 05:55:56.430124 2344 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:55:56.431453 kubelet[2344]: I1013 05:55:56.431408 2344 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:55:56.431453 kubelet[2344]: I1013 05:55:56.431430 2344 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:55:56.432001 kubelet[2344]: I1013 05:55:56.431965 2344 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:55:56.445216 kubelet[2344]: E1013 05:55:56.445126 2344 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.145:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 13 05:55:56.445396 kubelet[2344]: I1013 05:55:56.445220 2344 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:55:56.450548 kubelet[2344]: I1013 05:55:56.450496 2344 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:55:56.456811 kubelet[2344]: I1013 05:55:56.456777 2344 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:55:56.457815 kubelet[2344]: I1013 05:55:56.457740 2344 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:55:56.457997 kubelet[2344]: I1013 05:55:56.457787 2344 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:55:56.457997 kubelet[2344]: I1013 05:55:56.457993 2344 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:55:56.457997 kubelet[2344]: I1013 05:55:56.458003 2344 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:55:56.458296 kubelet[2344]: I1013 05:55:56.458136 2344 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:55:56.462953 kubelet[2344]: I1013 05:55:56.462925 2344 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:55:56.463203 kubelet[2344]: I1013 05:55:56.463156 2344 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:55:56.463203 kubelet[2344]: I1013 05:55:56.463176 2344 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:55:56.463203 kubelet[2344]: I1013 05:55:56.463209 2344 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:55:56.463481 kubelet[2344]: I1013 05:55:56.463453 2344 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:55:56.464364 kubelet[2344]: E1013 05:55:56.464304 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:55:56.464498 kubelet[2344]: E1013 05:55:56.464436 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 
10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:55:56.468151 kubelet[2344]: I1013 05:55:56.468111 2344 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:55:56.469497 kubelet[2344]: I1013 05:55:56.469452 2344 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:55:56.469552 kubelet[2344]: I1013 05:55:56.469492 2344 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:55:56.469666 kubelet[2344]: W1013 05:55:56.469625 2344 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 13 05:55:56.476471 kubelet[2344]: I1013 05:55:56.476429 2344 server.go:1262] "Started kubelet" Oct 13 05:55:56.476574 kubelet[2344]: I1013 05:55:56.476528 2344 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:55:56.485186 kubelet[2344]: I1013 05:55:56.484983 2344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:55:56.486966 kubelet[2344]: I1013 05:55:56.486031 2344 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:55:56.486966 kubelet[2344]: I1013 05:55:56.486663 2344 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:55:56.489578 kubelet[2344]: I1013 05:55:56.489380 2344 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:55:56.491631 kubelet[2344]: I1013 05:55:56.489435 2344 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:55:56.491714 kubelet[2344]: E1013 05:55:56.489654 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:55:56.491714 kubelet[2344]: I1013 05:55:56.490220 2344 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:55:56.491790 kubelet[2344]: I1013 05:55:56.491730 2344 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:55:56.492731 kubelet[2344]: E1013 05:55:56.492524 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="200ms" Oct 13 05:55:56.492808 kubelet[2344]: E1013 05:55:56.492661 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:55:56.492993 kubelet[2344]: I1013 05:55:56.492957 2344 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:55:56.493153 kubelet[2344]: I1013 05:55:56.493090 2344 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:55:56.494414 kubelet[2344]: I1013 05:55:56.494380 2344 factory.go:223] Registration of the systemd container factory successfully Oct 13 
05:55:56.494552 kubelet[2344]: I1013 05:55:56.494496 2344 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:55:56.495046 kubelet[2344]: E1013 05:55:56.493878 2344 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df750f67a10b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:55:56.476371129 +0000 UTC m=+0.889319444,LastTimestamp:2025-10-13 05:55:56.476371129 +0000 UTC m=+0.889319444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:55:56.496189 kubelet[2344]: E1013 05:55:56.496158 2344 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:55:56.496450 kubelet[2344]: I1013 05:55:56.496433 2344 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:55:56.508258 kubelet[2344]: I1013 05:55:56.508222 2344 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:55:56.508258 kubelet[2344]: I1013 05:55:56.508251 2344 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:55:56.508339 kubelet[2344]: I1013 05:55:56.508274 2344 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:55:56.511015 kubelet[2344]: I1013 05:55:56.510985 2344 policy_none.go:49] "None policy: Start" Oct 13 05:55:56.511063 kubelet[2344]: I1013 05:55:56.511018 2344 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:55:56.511063 kubelet[2344]: I1013 05:55:56.511036 2344 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:55:56.513556 kubelet[2344]: I1013 05:55:56.513537 2344 policy_none.go:47] "Start" Oct 13 05:55:56.515965 kubelet[2344]: I1013 05:55:56.515938 2344 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:55:56.518086 kubelet[2344]: I1013 05:55:56.517736 2344 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:55:56.518086 kubelet[2344]: I1013 05:55:56.517793 2344 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:55:56.518086 kubelet[2344]: I1013 05:55:56.517995 2344 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:55:56.518086 kubelet[2344]: E1013 05:55:56.518060 2344 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:55:56.518515 kubelet[2344]: E1013 05:55:56.518475 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:55:56.520810 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 13 05:55:56.536891 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 13 05:55:56.541120 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 13 05:55:56.561978 kubelet[2344]: E1013 05:55:56.561862 2344 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:55:56.562355 kubelet[2344]: I1013 05:55:56.562319 2344 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:55:56.562402 kubelet[2344]: I1013 05:55:56.562337 2344 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:55:56.562728 kubelet[2344]: I1013 05:55:56.562590 2344 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:55:56.564136 kubelet[2344]: E1013 05:55:56.564104 2344 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 13 05:55:56.564272 kubelet[2344]: E1013 05:55:56.564190 2344 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 13 05:55:56.633098 systemd[1]: Created slice kubepods-burstable-pod6b095e76f34565688c8f3828d02bc7a8.slice - libcontainer container kubepods-burstable-pod6b095e76f34565688c8f3828d02bc7a8.slice. Oct 13 05:55:56.663312 kubelet[2344]: E1013 05:55:56.663251 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:56.664121 kubelet[2344]: I1013 05:55:56.664093 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:55:56.665661 kubelet[2344]: E1013 05:55:56.664773 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Oct 13 05:55:56.668954 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 13 05:55:56.678139 kubelet[2344]: E1013 05:55:56.678090 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:56.681223 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 13 05:55:56.683463 kubelet[2344]: E1013 05:55:56.683428 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:56.693065 kubelet[2344]: E1013 05:55:56.693038 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="400ms" Oct 13 05:55:56.694301 kubelet[2344]: I1013 05:55:56.694256 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b095e76f34565688c8f3828d02bc7a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6b095e76f34565688c8f3828d02bc7a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:55:56.694301 kubelet[2344]: I1013 05:55:56.694286 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b095e76f34565688c8f3828d02bc7a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6b095e76f34565688c8f3828d02bc7a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:55:56.694402 kubelet[2344]: I1013 05:55:56.694309 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:56.694402 kubelet[2344]: I1013 05:55:56.694332 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:56.694402 kubelet[2344]: I1013 05:55:56.694349 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:56.694402 kubelet[2344]: I1013 05:55:56.694362 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:55:56.694402 kubelet[2344]: I1013 05:55:56.694377 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/6b095e76f34565688c8f3828d02bc7a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6b095e76f34565688c8f3828d02bc7a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:55:56.694557 kubelet[2344]: I1013 05:55:56.694402 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:56.694557 kubelet[2344]: I1013 05:55:56.694447 2344 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:55:56.867221 kubelet[2344]: I1013 05:55:56.867187 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:55:56.867643 kubelet[2344]: E1013 05:55:56.867611 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Oct 13 05:55:56.968899 containerd[1568]: time="2025-10-13T05:55:56.968786137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6b095e76f34565688c8f3828d02bc7a8,Namespace:kube-system,Attempt:0,}" Oct 13 05:55:56.985350 containerd[1568]: time="2025-10-13T05:55:56.985315259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 13 05:55:56.987820 containerd[1568]: time="2025-10-13T05:55:56.987775382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 13 05:55:57.094187 kubelet[2344]: E1013 05:55:57.094139 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="800ms" Oct 13 05:55:57.269719 kubelet[2344]: I1013 05:55:57.269529 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:55:57.270067 kubelet[2344]: E1013 05:55:57.270009 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Oct 13 05:55:57.319388 kubelet[2344]: E1013 05:55:57.319332 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.145:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 13 05:55:57.500160 kubelet[2344]: E1013 05:55:57.500091 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.145:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 13 05:55:57.519671 kubelet[2344]: E1013 05:55:57.519615 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.145:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 13 05:55:57.538274 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745191371.mount: Deactivated successfully. Oct 13 05:55:57.546922 containerd[1568]: time="2025-10-13T05:55:57.546860637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:55:57.549576 containerd[1568]: time="2025-10-13T05:55:57.549446085Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 13 05:55:57.554063 containerd[1568]: time="2025-10-13T05:55:57.554023327Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:55:57.555111 containerd[1568]: time="2025-10-13T05:55:57.555057546Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:55:57.555833 containerd[1568]: time="2025-10-13T05:55:57.555791953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:55:57.556731 containerd[1568]: time="2025-10-13T05:55:57.556670621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:55:57.557515 containerd[1568]: time="2025-10-13T05:55:57.557424965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 13 05:55:57.558612 containerd[1568]: time="2025-10-13T05:55:57.558574100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 13 05:55:57.559497 containerd[1568]: time="2025-10-13T05:55:57.559453108Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 567.905512ms" Oct 13 05:55:57.562240 containerd[1568]: time="2025-10-13T05:55:57.562200209Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 573.396338ms" Oct 13 05:55:57.562982 containerd[1568]: time="2025-10-13T05:55:57.562953050Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 591.191924ms" Oct 13 05:55:57.599105 containerd[1568]: time="2025-10-13T05:55:57.598706558Z" level=info msg="connecting to shim e43f802ea4aceabe87a2759758c32e0f6ad4ca6e6eb815157a582784af1200a5" address="unix:///run/containerd/s/5578815a268c7f4f5c9c4f333e41762272b3c29539597584b2881a6c8e1940a3" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:55:57.613437 containerd[1568]: time="2025-10-13T05:55:57.613392234Z" level=info msg="connecting to shim 4f6fcc574f1fe76838dbe788b506cdd77fee4d96df8be15cacf09c8295d6a186" address="unix:///run/containerd/s/5dc5b2bd66cd60ca2178209ca8b2e70b6372b8f67ef8d2b073b771997d40a971" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:55:57.619921 containerd[1568]: time="2025-10-13T05:55:57.619865080Z" level=info msg="connecting to shim 96cb11a073485a259c504a555e82273b095b4a3f96ef7e9cc06ad4554f304fce" address="unix:///run/containerd/s/f9c0abc9737713f03037ca07475209fc8b1f16653a41cd936093bf92158131eb" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:55:57.650910 systemd[1]: Started cri-containerd-e43f802ea4aceabe87a2759758c32e0f6ad4ca6e6eb815157a582784af1200a5.scope - libcontainer container e43f802ea4aceabe87a2759758c32e0f6ad4ca6e6eb815157a582784af1200a5. Oct 13 05:55:57.710864 systemd[1]: Started cri-containerd-4f6fcc574f1fe76838dbe788b506cdd77fee4d96df8be15cacf09c8295d6a186.scope - libcontainer container 4f6fcc574f1fe76838dbe788b506cdd77fee4d96df8be15cacf09c8295d6a186. Oct 13 05:55:57.713721 systemd[1]: Started cri-containerd-96cb11a073485a259c504a555e82273b095b4a3f96ef7e9cc06ad4554f304fce.scope - libcontainer container 96cb11a073485a259c504a555e82273b095b4a3f96ef7e9cc06ad4554f304fce. 
Oct 13 05:55:57.747467 kubelet[2344]: E1013 05:55:57.747406 2344 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.145:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 13 05:55:57.799057 containerd[1568]: time="2025-10-13T05:55:57.798840519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"e43f802ea4aceabe87a2759758c32e0f6ad4ca6e6eb815157a582784af1200a5\"" Oct 13 05:55:57.811482 containerd[1568]: time="2025-10-13T05:55:57.811424574Z" level=info msg="CreateContainer within sandbox \"e43f802ea4aceabe87a2759758c32e0f6ad4ca6e6eb815157a582784af1200a5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 13 05:55:57.824134 containerd[1568]: time="2025-10-13T05:55:57.824069333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f6fcc574f1fe76838dbe788b506cdd77fee4d96df8be15cacf09c8295d6a186\"" Oct 13 05:55:57.829304 containerd[1568]: time="2025-10-13T05:55:57.829259565Z" level=info msg="Container 660b2915b42e178812adf73b7c86bc4a3028fe18f4eaa4ac34d634c37a64a8e9: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:55:57.830713 containerd[1568]: time="2025-10-13T05:55:57.830661544Z" level=info msg="CreateContainer within sandbox \"4f6fcc574f1fe76838dbe788b506cdd77fee4d96df8be15cacf09c8295d6a186\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 13 05:55:57.831221 containerd[1568]: time="2025-10-13T05:55:57.831185225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6b095e76f34565688c8f3828d02bc7a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"96cb11a073485a259c504a555e82273b095b4a3f96ef7e9cc06ad4554f304fce\"" Oct 13 05:55:57.837951 containerd[1568]: time="2025-10-13T05:55:57.837923359Z" level=info msg="CreateContainer within sandbox \"96cb11a073485a259c504a555e82273b095b4a3f96ef7e9cc06ad4554f304fce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 13 05:55:57.842625 containerd[1568]: time="2025-10-13T05:55:57.842569892Z" level=info msg="CreateContainer within sandbox \"e43f802ea4aceabe87a2759758c32e0f6ad4ca6e6eb815157a582784af1200a5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"660b2915b42e178812adf73b7c86bc4a3028fe18f4eaa4ac34d634c37a64a8e9\"" Oct 13 05:55:57.843186 containerd[1568]: time="2025-10-13T05:55:57.843136153Z" level=info msg="StartContainer for \"660b2915b42e178812adf73b7c86bc4a3028fe18f4eaa4ac34d634c37a64a8e9\"" Oct 13 05:55:57.844191 containerd[1568]: time="2025-10-13T05:55:57.844165493Z" level=info msg="connecting to shim 660b2915b42e178812adf73b7c86bc4a3028fe18f4eaa4ac34d634c37a64a8e9" address="unix:///run/containerd/s/5578815a268c7f4f5c9c4f333e41762272b3c29539597584b2881a6c8e1940a3" protocol=ttrpc version=3 Oct 13 05:55:57.846720 containerd[1568]: time="2025-10-13T05:55:57.846683344Z" level=info msg="Container 9bfd6aa2992bd7aa36d601904a3e815a2118ec1bb8c0f385445bc4eb42c9bf1f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:55:57.855416 containerd[1568]: time="2025-10-13T05:55:57.855351407Z" level=info msg="CreateContainer within sandbox 
\"4f6fcc574f1fe76838dbe788b506cdd77fee4d96df8be15cacf09c8295d6a186\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9bfd6aa2992bd7aa36d601904a3e815a2118ec1bb8c0f385445bc4eb42c9bf1f\"" Oct 13 05:55:57.856231 containerd[1568]: time="2025-10-13T05:55:57.856201661Z" level=info msg="StartContainer for \"9bfd6aa2992bd7aa36d601904a3e815a2118ec1bb8c0f385445bc4eb42c9bf1f\"" Oct 13 05:55:57.857980 containerd[1568]: time="2025-10-13T05:55:57.857954949Z" level=info msg="connecting to shim 9bfd6aa2992bd7aa36d601904a3e815a2118ec1bb8c0f385445bc4eb42c9bf1f" address="unix:///run/containerd/s/5dc5b2bd66cd60ca2178209ca8b2e70b6372b8f67ef8d2b073b771997d40a971" protocol=ttrpc version=3 Oct 13 05:55:57.858918 containerd[1568]: time="2025-10-13T05:55:57.858880053Z" level=info msg="Container 891d731f1d802eaf43951ec9e7d05d60d326db96fec9a5426ab1fc074864de6c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:55:57.867529 containerd[1568]: time="2025-10-13T05:55:57.867340316Z" level=info msg="CreateContainer within sandbox \"96cb11a073485a259c504a555e82273b095b4a3f96ef7e9cc06ad4554f304fce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"891d731f1d802eaf43951ec9e7d05d60d326db96fec9a5426ab1fc074864de6c\"" Oct 13 05:55:57.868503 containerd[1568]: time="2025-10-13T05:55:57.868466167Z" level=info msg="StartContainer for \"891d731f1d802eaf43951ec9e7d05d60d326db96fec9a5426ab1fc074864de6c\"" Oct 13 05:55:57.871364 containerd[1568]: time="2025-10-13T05:55:57.871098964Z" level=info msg="connecting to shim 891d731f1d802eaf43951ec9e7d05d60d326db96fec9a5426ab1fc074864de6c" address="unix:///run/containerd/s/f9c0abc9737713f03037ca07475209fc8b1f16653a41cd936093bf92158131eb" protocol=ttrpc version=3 Oct 13 05:55:57.873913 systemd[1]: Started cri-containerd-660b2915b42e178812adf73b7c86bc4a3028fe18f4eaa4ac34d634c37a64a8e9.scope - libcontainer container 660b2915b42e178812adf73b7c86bc4a3028fe18f4eaa4ac34d634c37a64a8e9. Oct 13 05:55:57.916973 systemd[1]: Started cri-containerd-9bfd6aa2992bd7aa36d601904a3e815a2118ec1bb8c0f385445bc4eb42c9bf1f.scope - libcontainer container 9bfd6aa2992bd7aa36d601904a3e815a2118ec1bb8c0f385445bc4eb42c9bf1f. 
Oct 13 05:55:57.920253 kubelet[2344]: E1013 05:55:57.916977 2344 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.145:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.145:6443: connect: connection refused" interval="1.6s" Oct 13 05:55:57.920506 kubelet[2344]: E1013 05:55:57.920406 2344 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.145:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.145:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186df750f67a10b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-13 05:55:56.476371129 +0000 UTC m=+0.889319444,LastTimestamp:2025-10-13 05:55:56.476371129 +0000 UTC m=+0.889319444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 13 05:55:57.942876 systemd[1]: Started cri-containerd-891d731f1d802eaf43951ec9e7d05d60d326db96fec9a5426ab1fc074864de6c.scope - libcontainer container 891d731f1d802eaf43951ec9e7d05d60d326db96fec9a5426ab1fc074864de6c. Oct 13 05:55:58.004457 containerd[1568]: time="2025-10-13T05:55:58.004406634Z" level=info msg="StartContainer for \"660b2915b42e178812adf73b7c86bc4a3028fe18f4eaa4ac34d634c37a64a8e9\" returns successfully" Oct 13 05:55:58.073203 kubelet[2344]: I1013 05:55:58.072773 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:55:58.073738 kubelet[2344]: E1013 05:55:58.073666 2344 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.145:6443/api/v1/nodes\": dial tcp 10.0.0.145:6443: connect: connection refused" node="localhost" Oct 13 05:55:58.234131 containerd[1568]: time="2025-10-13T05:55:58.234020006Z" level=info msg="StartContainer for \"9bfd6aa2992bd7aa36d601904a3e815a2118ec1bb8c0f385445bc4eb42c9bf1f\" returns successfully" Oct 13 05:55:58.234576 containerd[1568]: time="2025-10-13T05:55:58.234519385Z" level=info msg="StartContainer for \"891d731f1d802eaf43951ec9e7d05d60d326db96fec9a5426ab1fc074864de6c\" returns successfully" Oct 13 05:55:58.526363 kubelet[2344]: E1013 05:55:58.526229 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:58.528050 kubelet[2344]: E1013 05:55:58.528023 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:58.530348 kubelet[2344]: E1013 05:55:58.530324 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:59.533573 kubelet[2344]: E1013 05:55:59.533531 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:59.534104 kubelet[2344]: E1013 05:55:59.533900 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:55:59.678582 kubelet[2344]: I1013 
05:55:59.676511 2344 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:56:00.158748 kubelet[2344]: E1013 05:56:00.157822 2344 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 13 05:56:00.346556 kubelet[2344]: I1013 05:56:00.346406 2344 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:56:00.346556 kubelet[2344]: E1013 05:56:00.346443 2344 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 13 05:56:00.357281 kubelet[2344]: E1013 05:56:00.357223 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:56:00.457976 kubelet[2344]: E1013 05:56:00.457799 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:56:00.534885 kubelet[2344]: E1013 05:56:00.534848 2344 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 13 05:56:00.558312 kubelet[2344]: E1013 05:56:00.558229 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:56:00.658885 kubelet[2344]: E1013 05:56:00.658815 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:56:00.760080 kubelet[2344]: E1013 05:56:00.759904 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:56:00.860951 kubelet[2344]: E1013 05:56:00.860874 2344 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 13 05:56:00.991575 kubelet[2344]: I1013 05:56:00.991501 2344 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:56:00.999346 kubelet[2344]: E1013 05:56:00.999298 2344 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 13 05:56:00.999346 kubelet[2344]: I1013 05:56:00.999335 2344 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:01.001413 kubelet[2344]: E1013 05:56:01.001369 2344 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:01.001413 kubelet[2344]: I1013 05:56:01.001405 2344 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:01.003335 kubelet[2344]: E1013 05:56:01.003297 2344 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:01.148012 kubelet[2344]: I1013 05:56:01.147973 2344 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:01.378927 kubelet[2344]: I1013 05:56:01.378887 2344 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" 
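Worth noting in the retries above: the "Failed to ensure lease exists, will retry" interval doubles on each failure, from 200ms to 400ms, 800ms and finally 1.6s, until the API server comes up and the node registers. A tiny sketch of that doubling progression (the cap is purely an assumption for illustration; the log only shows the first four intervals):

```python
# Reproduces the retry intervals printed in this log: 0.2s, 0.4s, 0.8s, 1.6s, ...
def backoff(start: float = 0.2, factor: float = 2.0, cap: float = 7.0, steps: int = 6):
    interval = start
    for _ in range(steps):
        yield interval
        interval = min(interval * factor, cap)

print([round(i, 1) for i in backoff()])   # [0.2, 0.4, 0.8, 1.6, 3.2, 6.4]
```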
Oct 13 05:56:01.467012 kubelet[2344]: I1013 05:56:01.466832 2344 apiserver.go:52] "Watching apiserver" Oct 13 05:56:01.491910 kubelet[2344]: I1013 05:56:01.491844 2344 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:56:03.026972 systemd[1]: Reload requested from client PID 2635 ('systemctl') (unit session-7.scope)... Oct 13 05:56:03.026988 systemd[1]: Reloading... Oct 13 05:56:03.122089 zram_generator::config[2681]: No configuration found. Oct 13 05:56:03.397093 systemd[1]: Reloading finished in 369 ms. Oct 13 05:56:03.428853 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:56:03.436186 systemd[1]: kubelet.service: Deactivated successfully. Oct 13 05:56:03.436545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:56:03.436602 systemd[1]: kubelet.service: Consumed 961ms CPU time, 128.9M memory peak. Oct 13 05:56:03.438659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 13 05:56:03.676171 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 13 05:56:03.687064 (kubelet)[2723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 13 05:56:03.733243 kubelet[2723]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 13 05:56:03.733243 kubelet[2723]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 13 05:56:03.733814 kubelet[2723]: I1013 05:56:03.733779 2723 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 13 05:56:03.740539 kubelet[2723]: I1013 05:56:03.740498 2723 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 13 05:56:03.740539 kubelet[2723]: I1013 05:56:03.740522 2723 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 13 05:56:03.740646 kubelet[2723]: I1013 05:56:03.740557 2723 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 13 05:56:03.740646 kubelet[2723]: I1013 05:56:03.740573 2723 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 13 05:56:03.740783 kubelet[2723]: I1013 05:56:03.740763 2723 server.go:956] "Client rotation is on, will bootstrap in background" Oct 13 05:56:03.741872 kubelet[2723]: I1013 05:56:03.741852 2723 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 13 05:56:03.743955 kubelet[2723]: I1013 05:56:03.743938 2723 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 13 05:56:03.748722 kubelet[2723]: I1013 05:56:03.746818 2723 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 13 05:56:03.753419 kubelet[2723]: I1013 05:56:03.753362 2723 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 13 05:56:03.753675 kubelet[2723]: I1013 05:56:03.753629 2723 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 13 05:56:03.753908 kubelet[2723]: I1013 05:56:03.753665 2723 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 13 05:56:03.754019 kubelet[2723]: I1013 05:56:03.753911 2723 topology_manager.go:138] "Creating topology manager with none policy" Oct 13 05:56:03.754019 kubelet[2723]: I1013 05:56:03.753926 2723 container_manager_linux.go:306] "Creating device plugin manager" Oct 13 05:56:03.754019 kubelet[2723]: I1013 05:56:03.753957 2723 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 13 05:56:03.754932 kubelet[2723]: I1013 05:56:03.754896 2723 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:56:03.755119 kubelet[2723]: I1013 05:56:03.755099 2723 kubelet.go:475] "Attempting to sync node with API server" Oct 13 05:56:03.755119 kubelet[2723]: I1013 05:56:03.755118 2723 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 13 05:56:03.755169 kubelet[2723]: I1013 05:56:03.755147 2723 kubelet.go:387] "Adding apiserver pod source" Oct 13 05:56:03.755194 kubelet[2723]: I1013 05:56:03.755181 2723 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 13 05:56:03.757871 kubelet[2723]: I1013 05:56:03.757841 2723 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 13 05:56:03.758506 kubelet[2723]: I1013 05:56:03.758453 2723 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 13 05:56:03.758506 kubelet[2723]: I1013 05:56:03.758484 2723 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 13 05:56:03.763719 
kubelet[2723]: I1013 05:56:03.761708 2723 server.go:1262] "Started kubelet" Oct 13 05:56:03.763719 kubelet[2723]: I1013 05:56:03.762336 2723 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 13 05:56:03.763719 kubelet[2723]: I1013 05:56:03.762494 2723 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 13 05:56:03.763719 kubelet[2723]: I1013 05:56:03.762873 2723 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 13 05:56:03.763719 kubelet[2723]: I1013 05:56:03.763717 2723 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 13 05:56:03.763963 kubelet[2723]: I1013 05:56:03.763802 2723 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 13 05:56:03.768699 kubelet[2723]: I1013 05:56:03.768655 2723 server.go:310] "Adding debug handlers to kubelet server" Oct 13 05:56:03.771359 kubelet[2723]: I1013 05:56:03.771341 2723 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 13 05:56:03.772413 kubelet[2723]: I1013 05:56:03.772381 2723 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 13 05:56:03.778015 kubelet[2723]: I1013 05:56:03.777986 2723 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 13 05:56:03.779847 kubelet[2723]: I1013 05:56:03.779827 2723 reconciler.go:29] "Reconciler: start to sync state" Oct 13 05:56:03.781739 kubelet[2723]: I1013 05:56:03.780255 2723 factory.go:223] Registration of the containerd container factory successfully Oct 13 05:56:03.782917 kubelet[2723]: I1013 05:56:03.782897 2723 factory.go:223] Registration of the systemd container factory successfully Oct 13 05:56:03.783014 kubelet[2723]: E1013 05:56:03.780263 2723 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 13 05:56:03.783014 kubelet[2723]: I1013 05:56:03.781635 2723 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 13 05:56:03.783545 kubelet[2723]: I1013 05:56:03.783516 2723 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 13 05:56:03.785064 kubelet[2723]: I1013 05:56:03.785020 2723 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 13 05:56:03.785064 kubelet[2723]: I1013 05:56:03.785063 2723 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 13 05:56:03.785137 kubelet[2723]: I1013 05:56:03.785099 2723 kubelet.go:2427] "Starting kubelet main sync loop" Oct 13 05:56:03.785199 kubelet[2723]: E1013 05:56:03.785168 2723 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 13 05:56:03.818309 kubelet[2723]: I1013 05:56:03.818258 2723 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 13 05:56:03.818309 kubelet[2723]: I1013 05:56:03.818276 2723 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 13 05:56:03.818309 kubelet[2723]: I1013 05:56:03.818303 2723 state_mem.go:36] "Initialized new in-memory state store" Oct 13 05:56:03.818519 kubelet[2723]: I1013 05:56:03.818433 2723 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 13 05:56:03.818519 kubelet[2723]: I1013 05:56:03.818441 2723 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 13 05:56:03.818519 kubelet[2723]: I1013 05:56:03.818459 2723 policy_none.go:49] "None policy: Start" Oct 13 05:56:03.818519 kubelet[2723]: I1013 05:56:03.818471 2723 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 13 05:56:03.818519 kubelet[2723]: I1013 05:56:03.818482 2723 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 13 05:56:03.818620 kubelet[2723]: I1013 05:56:03.818565 2723 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 13 05:56:03.818620 kubelet[2723]: I1013 05:56:03.818576 2723 policy_none.go:47] "Start" Oct 13 05:56:03.822733 kubelet[2723]: E1013 05:56:03.822708 2723 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 13 05:56:03.823034 kubelet[2723]: I1013 05:56:03.823008 2723 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 13 05:56:03.823084 kubelet[2723]: I1013 05:56:03.823035 2723 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 13 05:56:03.823460 kubelet[2723]: I1013 05:56:03.823432 2723 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 13 05:56:03.828125 kubelet[2723]: E1013 05:56:03.828079 2723 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 13 05:56:03.886394 kubelet[2723]: I1013 05:56:03.886156 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:03.886394 kubelet[2723]: I1013 05:56:03.886255 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:56:03.886394 kubelet[2723]: I1013 05:56:03.886273 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:03.893212 kubelet[2723]: E1013 05:56:03.893153 2723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:03.893390 kubelet[2723]: E1013 05:56:03.893369 2723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:03.931168 kubelet[2723]: I1013 05:56:03.931015 2723 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 13 05:56:03.938833 kubelet[2723]: I1013 05:56:03.938787 2723 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 13 05:56:03.938917 kubelet[2723]: I1013 05:56:03.938868 2723 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 13 05:56:03.982573 kubelet[2723]: I1013 05:56:03.982498 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:03.982573 kubelet[2723]: I1013 05:56:03.982553 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:03.982573 kubelet[2723]: I1013 05:56:03.982584 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:03.982852 kubelet[2723]: I1013 05:56:03.982646 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:03.982852 kubelet[2723]: I1013 05:56:03.982757 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 13 05:56:03.982852 kubelet[2723]: I1013 05:56:03.982783 2723 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6b095e76f34565688c8f3828d02bc7a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6b095e76f34565688c8f3828d02bc7a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:03.982852 kubelet[2723]: I1013 05:56:03.982830 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6b095e76f34565688c8f3828d02bc7a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6b095e76f34565688c8f3828d02bc7a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:03.982978 kubelet[2723]: I1013 05:56:03.982874 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6b095e76f34565688c8f3828d02bc7a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6b095e76f34565688c8f3828d02bc7a8\") " pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:03.982978 kubelet[2723]: I1013 05:56:03.982906 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:04.756652 kubelet[2723]: I1013 05:56:04.756592 2723 apiserver.go:52] "Watching apiserver" Oct 13 05:56:04.779197 kubelet[2723]: I1013 05:56:04.779146 2723 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 13 05:56:04.802091 kubelet[2723]: I1013 05:56:04.802037 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 13 05:56:04.802675 kubelet[2723]: I1013 05:56:04.802653 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:04.803005 kubelet[2723]: I1013 05:56:04.802987 2723 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:04.856792 kubelet[2723]: E1013 05:56:04.856219 2723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 13 05:56:04.856970 kubelet[2723]: E1013 05:56:04.856898 2723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 13 05:56:04.857272 kubelet[2723]: E1013 05:56:04.857251 2723 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 13 05:56:04.881750 kubelet[2723]: I1013 05:56:04.881669 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.881649345 podStartE2EDuration="3.881649345s" podCreationTimestamp="2025-10-13 05:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:04.881448593 +0000 UTC m=+1.190406157" watchObservedRunningTime="2025-10-13 05:56:04.881649345 +0000 UTC m=+1.190606909" Oct 13 05:56:05.075597 kubelet[2723]: I1013 05:56:05.075481 2723 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.07546221 podStartE2EDuration="4.07546221s" podCreationTimestamp="2025-10-13 05:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:05.075148655 +0000 UTC m=+1.384106229" watchObservedRunningTime="2025-10-13 05:56:05.07546221 +0000 UTC m=+1.384419774" Oct 13 05:56:08.693985 kubelet[2723]: I1013 05:56:08.693924 2723 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 13 05:56:08.694445 containerd[1568]: time="2025-10-13T05:56:08.694215557Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 13 05:56:08.694890 kubelet[2723]: I1013 05:56:08.694549 2723 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 13 05:56:09.146631 kubelet[2723]: I1013 05:56:09.146535 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.146480774 podStartE2EDuration="6.146480774s" podCreationTimestamp="2025-10-13 05:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:05.172481396 +0000 UTC m=+1.481438980" watchObservedRunningTime="2025-10-13 05:56:09.146480774 +0000 UTC m=+5.455438358" Oct 13 05:56:09.527506 systemd[1]: Created slice kubepods-besteffort-pod7ba23808_af9a_4a78_9706_b3111ae46569.slice - libcontainer container kubepods-besteffort-pod7ba23808_af9a_4a78_9706_b3111ae46569.slice. Oct 13 05:56:09.613971 kubelet[2723]: I1013 05:56:09.613882 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ba23808-af9a-4a78-9706-b3111ae46569-kube-proxy\") pod \"kube-proxy-mj2wp\" (UID: \"7ba23808-af9a-4a78-9706-b3111ae46569\") " pod="kube-system/kube-proxy-mj2wp" Oct 13 05:56:09.613971 kubelet[2723]: I1013 05:56:09.613956 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ba23808-af9a-4a78-9706-b3111ae46569-xtables-lock\") pod \"kube-proxy-mj2wp\" (UID: \"7ba23808-af9a-4a78-9706-b3111ae46569\") " pod="kube-system/kube-proxy-mj2wp" Oct 13 05:56:09.613971 kubelet[2723]: I1013 05:56:09.613976 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ba23808-af9a-4a78-9706-b3111ae46569-lib-modules\") pod \"kube-proxy-mj2wp\" (UID: \"7ba23808-af9a-4a78-9706-b3111ae46569\") " pod="kube-system/kube-proxy-mj2wp" Oct 13 05:56:09.614216 kubelet[2723]: I1013 05:56:09.614055 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdv4f\" (UniqueName: \"kubernetes.io/projected/7ba23808-af9a-4a78-9706-b3111ae46569-kube-api-access-pdv4f\") pod \"kube-proxy-mj2wp\" (UID: \"7ba23808-af9a-4a78-9706-b3111ae46569\") " pod="kube-system/kube-proxy-mj2wp" Oct 13 05:56:09.843673 containerd[1568]: time="2025-10-13T05:56:09.843622097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mj2wp,Uid:7ba23808-af9a-4a78-9706-b3111ae46569,Namespace:kube-system,Attempt:0,}" Oct 13 
05:56:09.866055 containerd[1568]: time="2025-10-13T05:56:09.865975451Z" level=info msg="connecting to shim aee9fb1d22c2f02469e4420f370b3b6792b9975c0d19093eab9e58a3162899ef" address="unix:///run/containerd/s/c1f1dcb21f821573cf06fc6dc1e4e65c41e1df154f707b94b1042e485a941278" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:09.904949 systemd[1]: Started cri-containerd-aee9fb1d22c2f02469e4420f370b3b6792b9975c0d19093eab9e58a3162899ef.scope - libcontainer container aee9fb1d22c2f02469e4420f370b3b6792b9975c0d19093eab9e58a3162899ef. Oct 13 05:56:09.911870 systemd[1]: Created slice kubepods-besteffort-podb71257b5_c102_408e_a3b4_334feb9ed024.slice - libcontainer container kubepods-besteffort-podb71257b5_c102_408e_a3b4_334feb9ed024.slice. Oct 13 05:56:09.916864 kubelet[2723]: I1013 05:56:09.916813 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8srj\" (UniqueName: \"kubernetes.io/projected/b71257b5-c102-408e-a3b4-334feb9ed024-kube-api-access-z8srj\") pod \"tigera-operator-db78d5bd4-m4xtr\" (UID: \"b71257b5-c102-408e-a3b4-334feb9ed024\") " pod="tigera-operator/tigera-operator-db78d5bd4-m4xtr" Oct 13 05:56:09.916864 kubelet[2723]: I1013 05:56:09.916861 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b71257b5-c102-408e-a3b4-334feb9ed024-var-lib-calico\") pod \"tigera-operator-db78d5bd4-m4xtr\" (UID: \"b71257b5-c102-408e-a3b4-334feb9ed024\") " pod="tigera-operator/tigera-operator-db78d5bd4-m4xtr" Oct 13 05:56:09.947996 containerd[1568]: time="2025-10-13T05:56:09.947916856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mj2wp,Uid:7ba23808-af9a-4a78-9706-b3111ae46569,Namespace:kube-system,Attempt:0,} returns sandbox id \"aee9fb1d22c2f02469e4420f370b3b6792b9975c0d19093eab9e58a3162899ef\"" Oct 13 05:56:09.956444 containerd[1568]: time="2025-10-13T05:56:09.956384886Z" level=info msg="CreateContainer within sandbox \"aee9fb1d22c2f02469e4420f370b3b6792b9975c0d19093eab9e58a3162899ef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 13 05:56:09.968635 containerd[1568]: time="2025-10-13T05:56:09.968575225Z" level=info msg="Container 1d10f0cbef01c39cb1305327fdf4defe49fbf30f904276eaa05766bc3d16887f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:09.973007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548656922.mount: Deactivated successfully. Oct 13 05:56:09.977387 containerd[1568]: time="2025-10-13T05:56:09.977336732Z" level=info msg="CreateContainer within sandbox \"aee9fb1d22c2f02469e4420f370b3b6792b9975c0d19093eab9e58a3162899ef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1d10f0cbef01c39cb1305327fdf4defe49fbf30f904276eaa05766bc3d16887f\"" Oct 13 05:56:09.977929 containerd[1568]: time="2025-10-13T05:56:09.977894359Z" level=info msg="StartContainer for \"1d10f0cbef01c39cb1305327fdf4defe49fbf30f904276eaa05766bc3d16887f\"" Oct 13 05:56:09.979633 containerd[1568]: time="2025-10-13T05:56:09.979607368Z" level=info msg="connecting to shim 1d10f0cbef01c39cb1305327fdf4defe49fbf30f904276eaa05766bc3d16887f" address="unix:///run/containerd/s/c1f1dcb21f821573cf06fc6dc1e4e65c41e1df154f707b94b1042e485a941278" protocol=ttrpc version=3 Oct 13 05:56:10.000873 systemd[1]: Started cri-containerd-1d10f0cbef01c39cb1305327fdf4defe49fbf30f904276eaa05766bc3d16887f.scope - libcontainer container 1d10f0cbef01c39cb1305327fdf4defe49fbf30f904276eaa05766bc3d16887f. 
Oct 13 05:56:10.046382 containerd[1568]: time="2025-10-13T05:56:10.046328646Z" level=info msg="StartContainer for \"1d10f0cbef01c39cb1305327fdf4defe49fbf30f904276eaa05766bc3d16887f\" returns successfully" Oct 13 05:56:10.218913 containerd[1568]: time="2025-10-13T05:56:10.218765055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-m4xtr,Uid:b71257b5-c102-408e-a3b4-334feb9ed024,Namespace:tigera-operator,Attempt:0,}" Oct 13 05:56:10.239748 containerd[1568]: time="2025-10-13T05:56:10.239683149Z" level=info msg="connecting to shim 5ef8c85e3e022eadabce4a304a73d4acf55786c134799f4fad3a24703ac22a3b" address="unix:///run/containerd/s/ffa6b4647fb77c41b876ce487c734fe9eb575ba1c30a46dec352d54cce9bb804" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:10.270285 systemd[1]: Started cri-containerd-5ef8c85e3e022eadabce4a304a73d4acf55786c134799f4fad3a24703ac22a3b.scope - libcontainer container 5ef8c85e3e022eadabce4a304a73d4acf55786c134799f4fad3a24703ac22a3b. Oct 13 05:56:10.322571 containerd[1568]: time="2025-10-13T05:56:10.322526223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-db78d5bd4-m4xtr,Uid:b71257b5-c102-408e-a3b4-334feb9ed024,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5ef8c85e3e022eadabce4a304a73d4acf55786c134799f4fad3a24703ac22a3b\"" Oct 13 05:56:10.324541 containerd[1568]: time="2025-10-13T05:56:10.324477781Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Oct 13 05:56:10.826756 kubelet[2723]: I1013 05:56:10.826642 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mj2wp" podStartSLOduration=1.826616439 podStartE2EDuration="1.826616439s" podCreationTimestamp="2025-10-13 05:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:10.826392705 +0000 UTC m=+7.135350269" watchObservedRunningTime="2025-10-13 05:56:10.826616439 +0000 UTC m=+7.135574003" Oct 13 05:56:10.850842 update_engine[1549]: I20251013 05:56:10.850766 1549 update_attempter.cc:509] Updating boot flags... Oct 13 05:56:11.788115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584871162.mount: Deactivated successfully. 
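Editor's note: in this excerpt several journal records are wrapped onto each physical line. If you need them one record per line (for grepping or timing analysis), splitting on the timestamp prefix is enough. The snippet below is a minimal sketch, assuming every record starts with the same "Oct 13 HH:MM:SS.ffffff" prefix used throughout this log; the function name is illustrative, not part of any tool shown here.

```python
import re

# Each journal record in this excerpt begins with a syslog-style timestamp
# such as "Oct 13 05:56:10.046382".  A zero-width split on that prefix
# recovers one record per line when several records were wrapped together.
TS = re.compile(r"(?=Oct 13 \d{2}:\d{2}:\d{2}\.\d{6} )")

def split_records(dump: str) -> list[str]:
    """Return the individual journal records contained in a wrapped dump."""
    return [rec.strip() for rec in TS.split(dump) if rec.strip()]

# Example: two records jammed onto one physical line (message text elided).
line = ('Oct 13 05:56:10.046382 containerd[1568]: ... "StartContainer ... returns successfully" '
        'Oct 13 05:56:10.218913 containerd[1568]: ... "RunPodSandbox for ..."')
for rec in split_records(line):
    print(rec)
```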
Oct 13 05:56:12.389982 containerd[1568]: time="2025-10-13T05:56:12.389925744Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:12.390771 containerd[1568]: time="2025-10-13T05:56:12.390742791Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Oct 13 05:56:12.391827 containerd[1568]: time="2025-10-13T05:56:12.391804310Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:12.393815 containerd[1568]: time="2025-10-13T05:56:12.393786562Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:12.394362 containerd[1568]: time="2025-10-13T05:56:12.394329810Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.06982055s" Oct 13 05:56:12.394362 containerd[1568]: time="2025-10-13T05:56:12.394357292Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Oct 13 05:56:12.398192 containerd[1568]: time="2025-10-13T05:56:12.398132778Z" level=info msg="CreateContainer within sandbox \"5ef8c85e3e022eadabce4a304a73d4acf55786c134799f4fad3a24703ac22a3b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 13 05:56:12.408010 containerd[1568]: time="2025-10-13T05:56:12.407954398Z" level=info msg="Container b889a6b0ac5b12bdb881e7723b63bf83445fd5513e567c81bcbf005c96ffb2f2: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:12.412479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount602392812.mount: Deactivated successfully. Oct 13 05:56:12.416871 containerd[1568]: time="2025-10-13T05:56:12.416811372Z" level=info msg="CreateContainer within sandbox \"5ef8c85e3e022eadabce4a304a73d4acf55786c134799f4fad3a24703ac22a3b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b889a6b0ac5b12bdb881e7723b63bf83445fd5513e567c81bcbf005c96ffb2f2\"" Oct 13 05:56:12.418710 containerd[1568]: time="2025-10-13T05:56:12.417436675Z" level=info msg="StartContainer for \"b889a6b0ac5b12bdb881e7723b63bf83445fd5513e567c81bcbf005c96ffb2f2\"" Oct 13 05:56:12.418710 containerd[1568]: time="2025-10-13T05:56:12.418541026Z" level=info msg="connecting to shim b889a6b0ac5b12bdb881e7723b63bf83445fd5513e567c81bcbf005c96ffb2f2" address="unix:///run/containerd/s/ffa6b4647fb77c41b876ce487c734fe9eb575ba1c30a46dec352d54cce9bb804" protocol=ttrpc version=3 Oct 13 05:56:12.482841 systemd[1]: Started cri-containerd-b889a6b0ac5b12bdb881e7723b63bf83445fd5513e567c81bcbf005c96ffb2f2.scope - libcontainer container b889a6b0ac5b12bdb881e7723b63bf83445fd5513e567c81bcbf005c96ffb2f2. 
Oct 13 05:56:12.515595 containerd[1568]: time="2025-10-13T05:56:12.515543447Z" level=info msg="StartContainer for \"b889a6b0ac5b12bdb881e7723b63bf83445fd5513e567c81bcbf005c96ffb2f2\" returns successfully" Oct 13 05:56:17.840109 kubelet[2723]: I1013 05:56:17.839168 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-db78d5bd4-m4xtr" podStartSLOduration=6.768382539 podStartE2EDuration="8.83914852s" podCreationTimestamp="2025-10-13 05:56:09 +0000 UTC" firstStartedPulling="2025-10-13 05:56:10.324131394 +0000 UTC m=+6.633088958" lastFinishedPulling="2025-10-13 05:56:12.394897385 +0000 UTC m=+8.703854939" observedRunningTime="2025-10-13 05:56:12.831376588 +0000 UTC m=+9.140334153" watchObservedRunningTime="2025-10-13 05:56:17.83914852 +0000 UTC m=+14.148106084" Oct 13 05:56:17.870393 sudo[1780]: pam_unix(sudo:session): session closed for user root Oct 13 05:56:17.872364 sshd[1779]: Connection closed by 10.0.0.1 port 38924 Oct 13 05:56:17.875383 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:17.885751 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit. Oct 13 05:56:17.887431 systemd[1]: sshd@6-10.0.0.145:22-10.0.0.1:38924.service: Deactivated successfully. Oct 13 05:56:17.889650 systemd[1]: session-7.scope: Deactivated successfully. Oct 13 05:56:17.891308 systemd[1]: session-7.scope: Consumed 7.031s CPU time, 232.5M memory peak. Oct 13 05:56:17.895267 systemd-logind[1543]: Removed session 7. Oct 13 05:56:20.317940 systemd[1]: Created slice kubepods-besteffort-pod3f5e3207_0f7d_41ed_9d8b_92b2807cd5d9.slice - libcontainer container kubepods-besteffort-pod3f5e3207_0f7d_41ed_9d8b_92b2807cd5d9.slice. Oct 13 05:56:20.479681 kubelet[2723]: I1013 05:56:20.479613 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh5d9\" (UniqueName: \"kubernetes.io/projected/3f5e3207-0f7d-41ed-9d8b-92b2807cd5d9-kube-api-access-lh5d9\") pod \"calico-typha-5974f6b75c-xgvdz\" (UID: \"3f5e3207-0f7d-41ed-9d8b-92b2807cd5d9\") " pod="calico-system/calico-typha-5974f6b75c-xgvdz" Oct 13 05:56:20.480158 kubelet[2723]: I1013 05:56:20.479770 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3f5e3207-0f7d-41ed-9d8b-92b2807cd5d9-typha-certs\") pod \"calico-typha-5974f6b75c-xgvdz\" (UID: \"3f5e3207-0f7d-41ed-9d8b-92b2807cd5d9\") " pod="calico-system/calico-typha-5974f6b75c-xgvdz" Oct 13 05:56:20.480158 kubelet[2723]: I1013 05:56:20.479838 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3f5e3207-0f7d-41ed-9d8b-92b2807cd5d9-tigera-ca-bundle\") pod \"calico-typha-5974f6b75c-xgvdz\" (UID: \"3f5e3207-0f7d-41ed-9d8b-92b2807cd5d9\") " pod="calico-system/calico-typha-5974f6b75c-xgvdz" Oct 13 05:56:20.595403 systemd[1]: Created slice kubepods-besteffort-pod99d11736_f0f8_428c_92e4_941a02783652.slice - libcontainer container kubepods-besteffort-pod99d11736_f0f8_428c_92e4_941a02783652.slice. 
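Editor's note: the pod_startup_latency_tracker record above for tigera-operator-db78d5bd4-m4xtr reports podStartE2EDuration=8.83914852s but podStartSLOduration=6.768382539, together with firstStartedPulling/lastFinishedPulling timestamps. For these figures the SLO duration is, to within clock rounding, the end-to-end time minus the image-pull window. A small check of that arithmetic, using the timestamps printed in the record (truncated to microseconds for strptime):

```python
from datetime import datetime

# Timestamps copied from the pod_startup_latency_tracker record above.
fmt = "%Y-%m-%d %H:%M:%S.%f"
created   = datetime.strptime("2025-10-13 05:56:09.000000", fmt)  # podCreationTimestamp
running   = datetime.strptime("2025-10-13 05:56:17.839148", fmt)  # observedRunningTime
pull_from = datetime.strptime("2025-10-13 05:56:10.324131", fmt)  # firstStartedPulling
pull_to   = datetime.strptime("2025-10-13 05:56:12.394897", fmt)  # lastFinishedPulling

e2e  = (running - created).total_seconds()    # ~8.839 s, matches podStartE2EDuration
pull = (pull_to - pull_from).total_seconds()  # ~2.071 s spent pulling the operator image
slo  = e2e - pull                             # ~6.768 s, matches podStartSLOduration
print(f"e2e={e2e:.3f}s  pull={pull:.3f}s  slo={slo:.3f}s")
```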
Oct 13 05:56:20.630433 containerd[1568]: time="2025-10-13T05:56:20.630359446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5974f6b75c-xgvdz,Uid:3f5e3207-0f7d-41ed-9d8b-92b2807cd5d9,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:20.676290 containerd[1568]: time="2025-10-13T05:56:20.676190895Z" level=info msg="connecting to shim a6d6d9cd0b5ed195b3de373723c4de013d469780ed11e21576f3d67c459316e5" address="unix:///run/containerd/s/bfb3879ea2bbc7bab75b38fc5733fae1a34f511d38ce90decf1ecbd786e2cb63" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:20.682544 kubelet[2723]: I1013 05:56:20.682049 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/99d11736-f0f8-428c-92e4-941a02783652-node-certs\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682544 kubelet[2723]: I1013 05:56:20.682097 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/99d11736-f0f8-428c-92e4-941a02783652-cni-net-dir\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682544 kubelet[2723]: I1013 05:56:20.682109 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/99d11736-f0f8-428c-92e4-941a02783652-policysync\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682544 kubelet[2723]: I1013 05:56:20.682124 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/99d11736-f0f8-428c-92e4-941a02783652-var-run-calico\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682544 kubelet[2723]: I1013 05:56:20.682196 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwwpp\" (UniqueName: \"kubernetes.io/projected/99d11736-f0f8-428c-92e4-941a02783652-kube-api-access-lwwpp\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682746 kubelet[2723]: I1013 05:56:20.682280 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/99d11736-f0f8-428c-92e4-941a02783652-lib-modules\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682746 kubelet[2723]: I1013 05:56:20.682297 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/99d11736-f0f8-428c-92e4-941a02783652-tigera-ca-bundle\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682746 kubelet[2723]: I1013 05:56:20.682315 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/99d11736-f0f8-428c-92e4-941a02783652-cni-bin-dir\") pod 
\"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682746 kubelet[2723]: I1013 05:56:20.682385 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/99d11736-f0f8-428c-92e4-941a02783652-cni-log-dir\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682746 kubelet[2723]: I1013 05:56:20.682434 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/99d11736-f0f8-428c-92e4-941a02783652-var-lib-calico\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682850 kubelet[2723]: I1013 05:56:20.682468 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/99d11736-f0f8-428c-92e4-941a02783652-xtables-lock\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.682850 kubelet[2723]: I1013 05:56:20.682494 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/99d11736-f0f8-428c-92e4-941a02783652-flexvol-driver-host\") pod \"calico-node-nmqq5\" (UID: \"99d11736-f0f8-428c-92e4-941a02783652\") " pod="calico-system/calico-node-nmqq5" Oct 13 05:56:20.705953 systemd[1]: Started cri-containerd-a6d6d9cd0b5ed195b3de373723c4de013d469780ed11e21576f3d67c459316e5.scope - libcontainer container a6d6d9cd0b5ed195b3de373723c4de013d469780ed11e21576f3d67c459316e5. Oct 13 05:56:20.760560 containerd[1568]: time="2025-10-13T05:56:20.760503294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5974f6b75c-xgvdz,Uid:3f5e3207-0f7d-41ed-9d8b-92b2807cd5d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6d6d9cd0b5ed195b3de373723c4de013d469780ed11e21576f3d67c459316e5\"" Oct 13 05:56:20.762415 containerd[1568]: time="2025-10-13T05:56:20.762370805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Oct 13 05:56:20.788958 kubelet[2723]: E1013 05:56:20.788907 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.789220 kubelet[2723]: W1013 05:56:20.789034 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.789220 kubelet[2723]: E1013 05:56:20.789067 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:20.793366 kubelet[2723]: E1013 05:56:20.793299 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.793427 kubelet[2723]: W1013 05:56:20.793355 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.793427 kubelet[2723]: E1013 05:56:20.793403 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.835738 kubelet[2723]: E1013 05:56:20.835376 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p5bx" podUID="83ab7fef-df4c-4467-a7d6-2fb6be70aa2c" Oct 13 05:56:20.906493 kubelet[2723]: E1013 05:56:20.906369 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.906718 kubelet[2723]: W1013 05:56:20.906628 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.906718 kubelet[2723]: E1013 05:56:20.906657 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.906986 kubelet[2723]: E1013 05:56:20.906975 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.907087 kubelet[2723]: W1013 05:56:20.907035 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.907087 kubelet[2723]: E1013 05:56:20.907047 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.907422 kubelet[2723]: E1013 05:56:20.907410 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.908187 kubelet[2723]: W1013 05:56:20.907474 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.908187 kubelet[2723]: E1013 05:56:20.907486 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:20.908585 kubelet[2723]: E1013 05:56:20.908542 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.908585 kubelet[2723]: W1013 05:56:20.908556 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.908585 kubelet[2723]: E1013 05:56:20.908565 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.909041 kubelet[2723]: E1013 05:56:20.908948 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.909041 kubelet[2723]: W1013 05:56:20.908959 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.909041 kubelet[2723]: E1013 05:56:20.908968 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.909198 kubelet[2723]: E1013 05:56:20.909187 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.909270 kubelet[2723]: W1013 05:56:20.909257 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.909345 kubelet[2723]: E1013 05:56:20.909329 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.910681 kubelet[2723]: E1013 05:56:20.910667 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.910890 kubelet[2723]: W1013 05:56:20.910786 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.910890 kubelet[2723]: E1013 05:56:20.910799 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.911017 kubelet[2723]: E1013 05:56:20.911006 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.911069 kubelet[2723]: W1013 05:56:20.911059 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.911120 kubelet[2723]: E1013 05:56:20.911110 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:20.911718 kubelet[2723]: E1013 05:56:20.911622 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.911718 kubelet[2723]: W1013 05:56:20.911635 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.911718 kubelet[2723]: E1013 05:56:20.911645 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.912091 kubelet[2723]: E1013 05:56:20.912037 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.912091 kubelet[2723]: W1013 05:56:20.912048 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.912091 kubelet[2723]: E1013 05:56:20.912057 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.912460 kubelet[2723]: E1013 05:56:20.912392 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.912460 kubelet[2723]: W1013 05:56:20.912406 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.912460 kubelet[2723]: E1013 05:56:20.912417 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.912836 kubelet[2723]: E1013 05:56:20.912776 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.912836 kubelet[2723]: W1013 05:56:20.912787 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.912836 kubelet[2723]: E1013 05:56:20.912795 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.913164 kubelet[2723]: E1013 05:56:20.913112 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.913164 kubelet[2723]: W1013 05:56:20.913122 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.913164 kubelet[2723]: E1013 05:56:20.913132 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:20.913491 kubelet[2723]: E1013 05:56:20.913441 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.913491 kubelet[2723]: W1013 05:56:20.913451 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.913491 kubelet[2723]: E1013 05:56:20.913459 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.913840 kubelet[2723]: E1013 05:56:20.913786 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.913840 kubelet[2723]: W1013 05:56:20.913797 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.913840 kubelet[2723]: E1013 05:56:20.913806 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.914126 kubelet[2723]: E1013 05:56:20.914063 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.914126 kubelet[2723]: W1013 05:56:20.914072 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.914126 kubelet[2723]: E1013 05:56:20.914082 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.915098 containerd[1568]: time="2025-10-13T05:56:20.914959074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nmqq5,Uid:99d11736-f0f8-428c-92e4-941a02783652,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:20.915146 kubelet[2723]: E1013 05:56:20.915009 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.915146 kubelet[2723]: W1013 05:56:20.915016 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.915146 kubelet[2723]: E1013 05:56:20.915026 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:20.915850 kubelet[2723]: E1013 05:56:20.915739 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.915850 kubelet[2723]: W1013 05:56:20.915751 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.915850 kubelet[2723]: E1013 05:56:20.915772 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.916132 kubelet[2723]: E1013 05:56:20.916078 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.916132 kubelet[2723]: W1013 05:56:20.916088 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.916132 kubelet[2723]: E1013 05:56:20.916097 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.916544 kubelet[2723]: E1013 05:56:20.916473 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.916544 kubelet[2723]: W1013 05:56:20.916483 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.916544 kubelet[2723]: E1013 05:56:20.916492 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.943835 containerd[1568]: time="2025-10-13T05:56:20.943783235Z" level=info msg="connecting to shim 91a966012a134004879e593ad2d3f84eba2a18d7f16cb732313603eaaca16928" address="unix:///run/containerd/s/2ee65ab620b74c092698a83a0b1db383621c9ab35a206259331fb2e3ff5bf28a" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:20.980001 systemd[1]: Started cri-containerd-91a966012a134004879e593ad2d3f84eba2a18d7f16cb732313603eaaca16928.scope - libcontainer container 91a966012a134004879e593ad2d3f84eba2a18d7f16cb732313603eaaca16928. Oct 13 05:56:20.985525 kubelet[2723]: E1013 05:56:20.985495 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.985525 kubelet[2723]: W1013 05:56:20.985517 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.985629 kubelet[2723]: E1013 05:56:20.985537 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:20.985629 kubelet[2723]: I1013 05:56:20.985561 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/83ab7fef-df4c-4467-a7d6-2fb6be70aa2c-varrun\") pod \"csi-node-driver-8p5bx\" (UID: \"83ab7fef-df4c-4467-a7d6-2fb6be70aa2c\") " pod="calico-system/csi-node-driver-8p5bx" Oct 13 05:56:20.985831 kubelet[2723]: E1013 05:56:20.985815 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.985874 kubelet[2723]: W1013 05:56:20.985826 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.985874 kubelet[2723]: E1013 05:56:20.985844 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.985874 kubelet[2723]: I1013 05:56:20.985871 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/83ab7fef-df4c-4467-a7d6-2fb6be70aa2c-registration-dir\") pod \"csi-node-driver-8p5bx\" (UID: \"83ab7fef-df4c-4467-a7d6-2fb6be70aa2c\") " pod="calico-system/csi-node-driver-8p5bx" Oct 13 05:56:20.986160 kubelet[2723]: E1013 05:56:20.986143 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.986160 kubelet[2723]: W1013 05:56:20.986155 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.986224 kubelet[2723]: E1013 05:56:20.986163 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.986224 kubelet[2723]: I1013 05:56:20.986199 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmh8x\" (UniqueName: \"kubernetes.io/projected/83ab7fef-df4c-4467-a7d6-2fb6be70aa2c-kube-api-access-bmh8x\") pod \"csi-node-driver-8p5bx\" (UID: \"83ab7fef-df4c-4467-a7d6-2fb6be70aa2c\") " pod="calico-system/csi-node-driver-8p5bx" Oct 13 05:56:20.986498 kubelet[2723]: E1013 05:56:20.986482 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.986498 kubelet[2723]: W1013 05:56:20.986494 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.986565 kubelet[2723]: E1013 05:56:20.986503 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:20.986565 kubelet[2723]: I1013 05:56:20.986532 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/83ab7fef-df4c-4467-a7d6-2fb6be70aa2c-socket-dir\") pod \"csi-node-driver-8p5bx\" (UID: \"83ab7fef-df4c-4467-a7d6-2fb6be70aa2c\") " pod="calico-system/csi-node-driver-8p5bx" Oct 13 05:56:20.986868 kubelet[2723]: E1013 05:56:20.986844 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.986930 kubelet[2723]: W1013 05:56:20.986869 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.986930 kubelet[2723]: E1013 05:56:20.986879 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.986930 kubelet[2723]: I1013 05:56:20.986895 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/83ab7fef-df4c-4467-a7d6-2fb6be70aa2c-kubelet-dir\") pod \"csi-node-driver-8p5bx\" (UID: \"83ab7fef-df4c-4467-a7d6-2fb6be70aa2c\") " pod="calico-system/csi-node-driver-8p5bx" Oct 13 05:56:20.987125 kubelet[2723]: E1013 05:56:20.987098 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.987125 kubelet[2723]: W1013 05:56:20.987111 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.987125 kubelet[2723]: E1013 05:56:20.987119 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.987374 kubelet[2723]: E1013 05:56:20.987357 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.987374 kubelet[2723]: W1013 05:56:20.987369 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.987428 kubelet[2723]: E1013 05:56:20.987378 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.987609 kubelet[2723]: E1013 05:56:20.987593 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.987609 kubelet[2723]: W1013 05:56:20.987604 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.987664 kubelet[2723]: E1013 05:56:20.987612 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:20.987853 kubelet[2723]: E1013 05:56:20.987836 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.987884 kubelet[2723]: W1013 05:56:20.987857 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.987884 kubelet[2723]: E1013 05:56:20.987865 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.988082 kubelet[2723]: E1013 05:56:20.988066 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.988082 kubelet[2723]: W1013 05:56:20.988076 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.988134 kubelet[2723]: E1013 05:56:20.988084 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.988323 kubelet[2723]: E1013 05:56:20.988307 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.988354 kubelet[2723]: W1013 05:56:20.988327 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.988354 kubelet[2723]: E1013 05:56:20.988335 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.988720 kubelet[2723]: E1013 05:56:20.988600 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.988720 kubelet[2723]: W1013 05:56:20.988627 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.988720 kubelet[2723]: E1013 05:56:20.988656 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.988985 kubelet[2723]: E1013 05:56:20.988954 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.988985 kubelet[2723]: W1013 05:56:20.988968 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.988985 kubelet[2723]: E1013 05:56:20.988976 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:20.989226 kubelet[2723]: E1013 05:56:20.989209 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.989226 kubelet[2723]: W1013 05:56:20.989220 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.989302 kubelet[2723]: E1013 05:56:20.989228 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:20.989444 kubelet[2723]: E1013 05:56:20.989429 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:20.989444 kubelet[2723]: W1013 05:56:20.989440 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:20.989500 kubelet[2723]: E1013 05:56:20.989447 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.011056 containerd[1568]: time="2025-10-13T05:56:21.011008427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nmqq5,Uid:99d11736-f0f8-428c-92e4-941a02783652,Namespace:calico-system,Attempt:0,} returns sandbox id \"91a966012a134004879e593ad2d3f84eba2a18d7f16cb732313603eaaca16928\"" Oct 13 05:56:21.087801 kubelet[2723]: E1013 05:56:21.087747 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.087801 kubelet[2723]: W1013 05:56:21.087775 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.087801 kubelet[2723]: E1013 05:56:21.087799 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.088181 kubelet[2723]: E1013 05:56:21.088145 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.088220 kubelet[2723]: W1013 05:56:21.088179 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.088258 kubelet[2723]: E1013 05:56:21.088222 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:21.088522 kubelet[2723]: E1013 05:56:21.088502 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.088522 kubelet[2723]: W1013 05:56:21.088517 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.088607 kubelet[2723]: E1013 05:56:21.088528 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.088805 kubelet[2723]: E1013 05:56:21.088759 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.088805 kubelet[2723]: W1013 05:56:21.088786 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.088805 kubelet[2723]: E1013 05:56:21.088798 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.089122 kubelet[2723]: E1013 05:56:21.089089 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.089163 kubelet[2723]: W1013 05:56:21.089117 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.089163 kubelet[2723]: E1013 05:56:21.089151 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.089540 kubelet[2723]: E1013 05:56:21.089501 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.089540 kubelet[2723]: W1013 05:56:21.089523 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.089540 kubelet[2723]: E1013 05:56:21.089533 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.089797 kubelet[2723]: E1013 05:56:21.089770 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.089797 kubelet[2723]: W1013 05:56:21.089791 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.089861 kubelet[2723]: E1013 05:56:21.089801 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:21.090130 kubelet[2723]: E1013 05:56:21.090099 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.090130 kubelet[2723]: W1013 05:56:21.090115 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.090130 kubelet[2723]: E1013 05:56:21.090126 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.090468 kubelet[2723]: E1013 05:56:21.090442 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.090468 kubelet[2723]: W1013 05:56:21.090456 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.090535 kubelet[2723]: E1013 05:56:21.090469 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.090730 kubelet[2723]: E1013 05:56:21.090712 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.090730 kubelet[2723]: W1013 05:56:21.090725 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.090811 kubelet[2723]: E1013 05:56:21.090736 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.091017 kubelet[2723]: E1013 05:56:21.091000 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.091017 kubelet[2723]: W1013 05:56:21.091011 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.091096 kubelet[2723]: E1013 05:56:21.091022 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.091301 kubelet[2723]: E1013 05:56:21.091284 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.091301 kubelet[2723]: W1013 05:56:21.091295 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.091379 kubelet[2723]: E1013 05:56:21.091306 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:21.091545 kubelet[2723]: E1013 05:56:21.091528 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.091545 kubelet[2723]: W1013 05:56:21.091539 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.091610 kubelet[2723]: E1013 05:56:21.091549 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.091786 kubelet[2723]: E1013 05:56:21.091770 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.091786 kubelet[2723]: W1013 05:56:21.091781 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.091878 kubelet[2723]: E1013 05:56:21.091795 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.092014 kubelet[2723]: E1013 05:56:21.091997 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.092014 kubelet[2723]: W1013 05:56:21.092008 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.092076 kubelet[2723]: E1013 05:56:21.092018 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.092246 kubelet[2723]: E1013 05:56:21.092228 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.092246 kubelet[2723]: W1013 05:56:21.092239 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.092327 kubelet[2723]: E1013 05:56:21.092249 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.092470 kubelet[2723]: E1013 05:56:21.092453 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.092470 kubelet[2723]: W1013 05:56:21.092464 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.092537 kubelet[2723]: E1013 05:56:21.092474 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:21.092779 kubelet[2723]: E1013 05:56:21.092760 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.092779 kubelet[2723]: W1013 05:56:21.092773 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.092867 kubelet[2723]: E1013 05:56:21.092790 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.093011 kubelet[2723]: E1013 05:56:21.092993 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.093011 kubelet[2723]: W1013 05:56:21.093004 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.093011 kubelet[2723]: E1013 05:56:21.093013 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.093225 kubelet[2723]: E1013 05:56:21.093197 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.093225 kubelet[2723]: W1013 05:56:21.093219 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.093307 kubelet[2723]: E1013 05:56:21.093228 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.093468 kubelet[2723]: E1013 05:56:21.093451 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.093468 kubelet[2723]: W1013 05:56:21.093461 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.093468 kubelet[2723]: E1013 05:56:21.093471 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.093679 kubelet[2723]: E1013 05:56:21.093663 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.093679 kubelet[2723]: W1013 05:56:21.093674 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.093763 kubelet[2723]: E1013 05:56:21.093682 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:21.093952 kubelet[2723]: E1013 05:56:21.093933 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.093952 kubelet[2723]: W1013 05:56:21.093943 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.094009 kubelet[2723]: E1013 05:56:21.093952 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.094150 kubelet[2723]: E1013 05:56:21.094134 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.094150 kubelet[2723]: W1013 05:56:21.094145 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.094214 kubelet[2723]: E1013 05:56:21.094153 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.094378 kubelet[2723]: E1013 05:56:21.094361 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.094378 kubelet[2723]: W1013 05:56:21.094372 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.094427 kubelet[2723]: E1013 05:56:21.094381 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:21.103342 kubelet[2723]: E1013 05:56:21.103309 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:21.103342 kubelet[2723]: W1013 05:56:21.103326 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:21.103342 kubelet[2723]: E1013 05:56:21.103341 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.397300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2770554813.mount: Deactivated successfully. 
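Editor's note: the repeated driver-call failures above come from kubelet probing the FlexVolume plugin directory nodeagent~uds. It execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is not installed yet ("executable file not found in $PATH"), so the call produces empty output, and unmarshalling an empty string with Go's encoding/json yields exactly "unexpected end of JSON input". As a hedged illustration only (this is not Calico's real driver, which appears to be installed by the pod2daemon-flexvol container pulled further down in this log), a minimal Go stand-in that answers the init call with the JSON shape kubelet looks for might look like:

```go
// Hypothetical stand-in for the missing FlexVolume driver binary that kubelet
// execs at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds.
// It only answers the "init" call; the field set below is an assumed minimal
// subset of the status JSON kubelet's flexvolume driver-call code parses.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out)) // kubelet unmarshals this stdout as JSON
		return
	}
	// Any other driver call is unsupported in this sketch.
	out, _ := json.Marshal(driverStatus{Status: "Not supported", Message: "sketch only handles init"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

Once a binary answering init like this (or the real Calico uds driver) exists at that path, the probe errors above should stop; until then kubelet keeps retrying and logging the same triplet.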
Oct 13 05:56:22.727321 containerd[1568]: time="2025-10-13T05:56:22.727190655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:22.728049 containerd[1568]: time="2025-10-13T05:56:22.728028393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Oct 13 05:56:22.729132 containerd[1568]: time="2025-10-13T05:56:22.729110051Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:22.730932 containerd[1568]: time="2025-10-13T05:56:22.730893753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:22.731388 containerd[1568]: time="2025-10-13T05:56:22.731356835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 1.968949761s" Oct 13 05:56:22.731388 containerd[1568]: time="2025-10-13T05:56:22.731384958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Oct 13 05:56:22.732629 containerd[1568]: time="2025-10-13T05:56:22.732520588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Oct 13 05:56:22.743008 containerd[1568]: time="2025-10-13T05:56:22.742964868Z" level=info msg="CreateContainer within sandbox \"a6d6d9cd0b5ed195b3de373723c4de013d469780ed11e21576f3d67c459316e5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 13 05:56:22.751598 containerd[1568]: time="2025-10-13T05:56:22.751544424Z" level=info msg="Container a520c50e70d9f1bc0026f55bc4862bfa6d54c10db83c5f59a13f8c45219ae44d: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:22.760287 containerd[1568]: time="2025-10-13T05:56:22.760243626Z" level=info msg="CreateContainer within sandbox \"a6d6d9cd0b5ed195b3de373723c4de013d469780ed11e21576f3d67c459316e5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"a520c50e70d9f1bc0026f55bc4862bfa6d54c10db83c5f59a13f8c45219ae44d\"" Oct 13 05:56:22.760791 containerd[1568]: time="2025-10-13T05:56:22.760755742Z" level=info msg="StartContainer for \"a520c50e70d9f1bc0026f55bc4862bfa6d54c10db83c5f59a13f8c45219ae44d\"" Oct 13 05:56:22.761744 containerd[1568]: time="2025-10-13T05:56:22.761716532Z" level=info msg="connecting to shim a520c50e70d9f1bc0026f55bc4862bfa6d54c10db83c5f59a13f8c45219ae44d" address="unix:///run/containerd/s/bfb3879ea2bbc7bab75b38fc5733fae1a34f511d38ce90decf1ecbd786e2cb63" protocol=ttrpc version=3 Oct 13 05:56:22.784826 systemd[1]: Started cri-containerd-a520c50e70d9f1bc0026f55bc4862bfa6d54c10db83c5f59a13f8c45219ae44d.scope - libcontainer container a520c50e70d9f1bc0026f55bc4862bfa6d54c10db83c5f59a13f8c45219ae44d. 
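Editor's note: the typha pull above reports both the bytes read (35237389) and the wall-clock pull time (1.968949761s), so the effective transfer rate is a one-line division. The sketch below only reuses the numbers printed in the log (assuming they refer to the same pull) and works out to roughly 17 MiB/s:

```go
// Back-of-the-envelope throughput for the calico/typha:v3.30.3 pull reported above.
package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 35237389.0                           // "bytes read=35237389"
	pullTime, _ := time.ParseDuration("1.968949761s") // "in 1.968949761s"
	mibPerSec := bytesRead / pullTime.Seconds() / (1 << 20)
	fmt.Printf("calico/typha:v3.30.3 pulled at ~%.1f MiB/s\n", mibPerSec) // ~17.1 MiB/s
}
```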
Oct 13 05:56:22.785885 kubelet[2723]: E1013 05:56:22.785850 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p5bx" podUID="83ab7fef-df4c-4467-a7d6-2fb6be70aa2c" Oct 13 05:56:22.840857 containerd[1568]: time="2025-10-13T05:56:22.840805750Z" level=info msg="StartContainer for \"a520c50e70d9f1bc0026f55bc4862bfa6d54c10db83c5f59a13f8c45219ae44d\" returns successfully" Oct 13 05:56:22.929436 kubelet[2723]: E1013 05:56:22.929399 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.929771 kubelet[2723]: W1013 05:56:22.929606 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.930573 kubelet[2723]: E1013 05:56:22.930509 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.930933 kubelet[2723]: E1013 05:56:22.930873 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.930933 kubelet[2723]: W1013 05:56:22.930885 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.930933 kubelet[2723]: E1013 05:56:22.930895 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.931279 kubelet[2723]: E1013 05:56:22.931212 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.931279 kubelet[2723]: W1013 05:56:22.931222 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.931279 kubelet[2723]: E1013 05:56:22.931231 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.931749 kubelet[2723]: E1013 05:56:22.931705 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.931749 kubelet[2723]: W1013 05:56:22.931717 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.931749 kubelet[2723]: E1013 05:56:22.931726 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:22.932108 kubelet[2723]: E1013 05:56:22.932071 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.932108 kubelet[2723]: W1013 05:56:22.932082 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.932289 kubelet[2723]: E1013 05:56:22.932237 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.932675 kubelet[2723]: E1013 05:56:22.932604 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.932785 kubelet[2723]: W1013 05:56:22.932718 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.932785 kubelet[2723]: E1013 05:56:22.932731 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.933417 kubelet[2723]: E1013 05:56:22.933349 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.933417 kubelet[2723]: W1013 05:56:22.933361 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.933417 kubelet[2723]: E1013 05:56:22.933370 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.933829 kubelet[2723]: E1013 05:56:22.933769 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.933829 kubelet[2723]: W1013 05:56:22.933783 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.933829 kubelet[2723]: E1013 05:56:22.933792 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.934357 kubelet[2723]: E1013 05:56:22.934212 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.934357 kubelet[2723]: W1013 05:56:22.934224 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.934357 kubelet[2723]: E1013 05:56:22.934234 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:22.934911 kubelet[2723]: E1013 05:56:22.934885 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.934911 kubelet[2723]: W1013 05:56:22.934897 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.934911 kubelet[2723]: E1013 05:56:22.934909 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.935280 kubelet[2723]: E1013 05:56:22.935264 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.935280 kubelet[2723]: W1013 05:56:22.935276 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.935352 kubelet[2723]: E1013 05:56:22.935286 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.935509 kubelet[2723]: E1013 05:56:22.935494 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.935509 kubelet[2723]: W1013 05:56:22.935505 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.935565 kubelet[2723]: E1013 05:56:22.935513 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.935794 kubelet[2723]: E1013 05:56:22.935775 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.935794 kubelet[2723]: W1013 05:56:22.935789 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.935794 kubelet[2723]: E1013 05:56:22.935799 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:22.936001 kubelet[2723]: E1013 05:56:22.935985 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.936001 kubelet[2723]: W1013 05:56:22.935996 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.936052 kubelet[2723]: E1013 05:56:22.936005 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:22.936465 kubelet[2723]: E1013 05:56:22.936446 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:22.936465 kubelet[2723]: W1013 05:56:22.936461 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:22.936547 kubelet[2723]: E1013 05:56:22.936473 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.003189 kubelet[2723]: E1013 05:56:23.003050 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.003189 kubelet[2723]: W1013 05:56:23.003072 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.003189 kubelet[2723]: E1013 05:56:23.003093 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.003927 kubelet[2723]: E1013 05:56:23.003837 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.003927 kubelet[2723]: W1013 05:56:23.003853 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.003927 kubelet[2723]: E1013 05:56:23.003863 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.005165 kubelet[2723]: E1013 05:56:23.005148 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.005165 kubelet[2723]: W1013 05:56:23.005160 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.005270 kubelet[2723]: E1013 05:56:23.005170 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.006106 kubelet[2723]: E1013 05:56:23.006079 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.006149 kubelet[2723]: W1013 05:56:23.006107 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.006275 kubelet[2723]: E1013 05:56:23.006143 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:23.006518 kubelet[2723]: E1013 05:56:23.006504 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.006552 kubelet[2723]: W1013 05:56:23.006518 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.006552 kubelet[2723]: E1013 05:56:23.006530 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.006806 kubelet[2723]: E1013 05:56:23.006788 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.006855 kubelet[2723]: W1013 05:56:23.006831 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.006855 kubelet[2723]: E1013 05:56:23.006842 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.007089 kubelet[2723]: E1013 05:56:23.007076 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.007089 kubelet[2723]: W1013 05:56:23.007086 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.007158 kubelet[2723]: E1013 05:56:23.007095 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.007337 kubelet[2723]: E1013 05:56:23.007323 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.007337 kubelet[2723]: W1013 05:56:23.007334 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.007404 kubelet[2723]: E1013 05:56:23.007344 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.007556 kubelet[2723]: E1013 05:56:23.007541 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.007556 kubelet[2723]: W1013 05:56:23.007551 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.007614 kubelet[2723]: E1013 05:56:23.007561 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:23.008062 kubelet[2723]: E1013 05:56:23.008032 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.008062 kubelet[2723]: W1013 05:56:23.008048 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.008062 kubelet[2723]: E1013 05:56:23.008060 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.008337 kubelet[2723]: E1013 05:56:23.008321 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.008337 kubelet[2723]: W1013 05:56:23.008334 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.008407 kubelet[2723]: E1013 05:56:23.008346 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.008607 kubelet[2723]: E1013 05:56:23.008590 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.008607 kubelet[2723]: W1013 05:56:23.008603 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.008677 kubelet[2723]: E1013 05:56:23.008615 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.009156 kubelet[2723]: E1013 05:56:23.009139 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.009156 kubelet[2723]: W1013 05:56:23.009155 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.009236 kubelet[2723]: E1013 05:56:23.009168 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.009514 kubelet[2723]: E1013 05:56:23.009487 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.009514 kubelet[2723]: W1013 05:56:23.009500 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.009514 kubelet[2723]: E1013 05:56:23.009511 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:23.010092 kubelet[2723]: E1013 05:56:23.010072 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.010092 kubelet[2723]: W1013 05:56:23.010089 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.010092 kubelet[2723]: E1013 05:56:23.010103 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.010340 kubelet[2723]: E1013 05:56:23.010325 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.010381 kubelet[2723]: W1013 05:56:23.010373 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.010411 kubelet[2723]: E1013 05:56:23.010387 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.010970 kubelet[2723]: E1013 05:56:23.010952 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.010970 kubelet[2723]: W1013 05:56:23.010968 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.011040 kubelet[2723]: E1013 05:56:23.010981 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.011344 kubelet[2723]: E1013 05:56:23.011326 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.011344 kubelet[2723]: W1013 05:56:23.011341 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.011411 kubelet[2723]: E1013 05:56:23.011353 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.855658 kubelet[2723]: I1013 05:56:23.855608 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:23.944492 kubelet[2723]: E1013 05:56:23.944449 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.944492 kubelet[2723]: W1013 05:56:23.944472 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.944492 kubelet[2723]: E1013 05:56:23.944497 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:23.944887 kubelet[2723]: E1013 05:56:23.944865 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.944887 kubelet[2723]: W1013 05:56:23.944881 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.944974 kubelet[2723]: E1013 05:56:23.944889 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.945169 kubelet[2723]: E1013 05:56:23.945138 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.945169 kubelet[2723]: W1013 05:56:23.945150 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.945169 kubelet[2723]: E1013 05:56:23.945157 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.945357 kubelet[2723]: E1013 05:56:23.945327 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.945357 kubelet[2723]: W1013 05:56:23.945338 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.945357 kubelet[2723]: E1013 05:56:23.945345 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.945523 kubelet[2723]: E1013 05:56:23.945509 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.945523 kubelet[2723]: W1013 05:56:23.945518 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.945574 kubelet[2723]: E1013 05:56:23.945525 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.945701 kubelet[2723]: E1013 05:56:23.945676 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.945701 kubelet[2723]: W1013 05:56:23.945700 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.945755 kubelet[2723]: E1013 05:56:23.945708 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:23.945874 kubelet[2723]: E1013 05:56:23.945860 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.945874 kubelet[2723]: W1013 05:56:23.945870 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.945921 kubelet[2723]: E1013 05:56:23.945889 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.946054 kubelet[2723]: E1013 05:56:23.946039 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.946054 kubelet[2723]: W1013 05:56:23.946048 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.946103 kubelet[2723]: E1013 05:56:23.946057 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.946288 kubelet[2723]: E1013 05:56:23.946273 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.946288 kubelet[2723]: W1013 05:56:23.946283 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.946333 kubelet[2723]: E1013 05:56:23.946290 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.946455 kubelet[2723]: E1013 05:56:23.946441 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.946455 kubelet[2723]: W1013 05:56:23.946450 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.946504 kubelet[2723]: E1013 05:56:23.946458 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.946806 kubelet[2723]: E1013 05:56:23.946768 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.946837 kubelet[2723]: W1013 05:56:23.946809 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.946876 kubelet[2723]: E1013 05:56:23.946852 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:23.947257 kubelet[2723]: E1013 05:56:23.947228 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.947257 kubelet[2723]: W1013 05:56:23.947251 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.947312 kubelet[2723]: E1013 05:56:23.947269 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.947547 kubelet[2723]: E1013 05:56:23.947529 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.947577 kubelet[2723]: W1013 05:56:23.947547 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.947577 kubelet[2723]: E1013 05:56:23.947559 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.947806 kubelet[2723]: E1013 05:56:23.947789 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.947806 kubelet[2723]: W1013 05:56:23.947802 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.947872 kubelet[2723]: E1013 05:56:23.947814 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:23.948025 kubelet[2723]: E1013 05:56:23.948008 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:23.948025 kubelet[2723]: W1013 05:56:23.948022 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:23.948075 kubelet[2723]: E1013 05:56:23.948033 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.013565 kubelet[2723]: E1013 05:56:24.013528 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.013565 kubelet[2723]: W1013 05:56:24.013557 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.013845 kubelet[2723]: E1013 05:56:24.013583 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:24.013917 kubelet[2723]: E1013 05:56:24.013899 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.013917 kubelet[2723]: W1013 05:56:24.013912 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.013987 kubelet[2723]: E1013 05:56:24.013923 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.014238 kubelet[2723]: E1013 05:56:24.014215 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.014238 kubelet[2723]: W1013 05:56:24.014233 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.014319 kubelet[2723]: E1013 05:56:24.014252 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.014471 kubelet[2723]: E1013 05:56:24.014455 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.014471 kubelet[2723]: W1013 05:56:24.014465 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.014471 kubelet[2723]: E1013 05:56:24.014473 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.014673 kubelet[2723]: E1013 05:56:24.014656 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.014673 kubelet[2723]: W1013 05:56:24.014666 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.014673 kubelet[2723]: E1013 05:56:24.014673 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.014916 kubelet[2723]: E1013 05:56:24.014899 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.014916 kubelet[2723]: W1013 05:56:24.014909 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.014916 kubelet[2723]: E1013 05:56:24.014917 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:24.015361 kubelet[2723]: E1013 05:56:24.015318 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.015361 kubelet[2723]: W1013 05:56:24.015350 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.015436 kubelet[2723]: E1013 05:56:24.015378 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.015619 kubelet[2723]: E1013 05:56:24.015593 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.015619 kubelet[2723]: W1013 05:56:24.015604 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.015619 kubelet[2723]: E1013 05:56:24.015612 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.015824 kubelet[2723]: E1013 05:56:24.015806 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.015824 kubelet[2723]: W1013 05:56:24.015816 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.015824 kubelet[2723]: E1013 05:56:24.015825 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.016015 kubelet[2723]: E1013 05:56:24.015998 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.016015 kubelet[2723]: W1013 05:56:24.016007 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.016015 kubelet[2723]: E1013 05:56:24.016014 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.016270 kubelet[2723]: E1013 05:56:24.016236 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.016270 kubelet[2723]: W1013 05:56:24.016254 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.016270 kubelet[2723]: E1013 05:56:24.016269 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:24.016587 kubelet[2723]: E1013 05:56:24.016564 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.016587 kubelet[2723]: W1013 05:56:24.016583 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.016660 kubelet[2723]: E1013 05:56:24.016599 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.016955 kubelet[2723]: E1013 05:56:24.016935 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.016955 kubelet[2723]: W1013 05:56:24.016950 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.017061 kubelet[2723]: E1013 05:56:24.016963 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.017194 kubelet[2723]: E1013 05:56:24.017174 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.017194 kubelet[2723]: W1013 05:56:24.017189 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.017268 kubelet[2723]: E1013 05:56:24.017200 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.017417 kubelet[2723]: E1013 05:56:24.017399 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.017417 kubelet[2723]: W1013 05:56:24.017413 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.017489 kubelet[2723]: E1013 05:56:24.017423 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.017672 kubelet[2723]: E1013 05:56:24.017655 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.017672 kubelet[2723]: W1013 05:56:24.017668 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.017776 kubelet[2723]: E1013 05:56:24.017680 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 13 05:56:24.018015 kubelet[2723]: E1013 05:56:24.017996 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.018015 kubelet[2723]: W1013 05:56:24.018010 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.018089 kubelet[2723]: E1013 05:56:24.018022 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.018236 kubelet[2723]: E1013 05:56:24.018217 2723 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 13 05:56:24.018236 kubelet[2723]: W1013 05:56:24.018230 2723 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 13 05:56:24.018306 kubelet[2723]: E1013 05:56:24.018240 2723 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 13 05:56:24.586100 containerd[1568]: time="2025-10-13T05:56:24.586034655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:24.631922 containerd[1568]: time="2025-10-13T05:56:24.631851346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Oct 13 05:56:24.655522 containerd[1568]: time="2025-10-13T05:56:24.655459485Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:24.733886 containerd[1568]: time="2025-10-13T05:56:24.733816064Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:24.734496 containerd[1568]: time="2025-10-13T05:56:24.734464986Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.001885948s" Oct 13 05:56:24.734496 containerd[1568]: time="2025-10-13T05:56:24.734496064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Oct 13 05:56:24.785924 kubelet[2723]: E1013 05:56:24.785845 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p5bx" podUID="83ab7fef-df4c-4467-a7d6-2fb6be70aa2c" Oct 13 05:56:24.832100 containerd[1568]: 
time="2025-10-13T05:56:24.831941672Z" level=info msg="CreateContainer within sandbox \"91a966012a134004879e593ad2d3f84eba2a18d7f16cb732313603eaaca16928\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 13 05:56:24.845539 containerd[1568]: time="2025-10-13T05:56:24.845412299Z" level=info msg="Container 4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:25.064623 containerd[1568]: time="2025-10-13T05:56:25.064551185Z" level=info msg="CreateContainer within sandbox \"91a966012a134004879e593ad2d3f84eba2a18d7f16cb732313603eaaca16928\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053\"" Oct 13 05:56:25.065324 containerd[1568]: time="2025-10-13T05:56:25.065271170Z" level=info msg="StartContainer for \"4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053\"" Oct 13 05:56:25.066798 containerd[1568]: time="2025-10-13T05:56:25.066751086Z" level=info msg="connecting to shim 4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053" address="unix:///run/containerd/s/2ee65ab620b74c092698a83a0b1db383621c9ab35a206259331fb2e3ff5bf28a" protocol=ttrpc version=3 Oct 13 05:56:25.095861 systemd[1]: Started cri-containerd-4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053.scope - libcontainer container 4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053. Oct 13 05:56:25.153193 systemd[1]: cri-containerd-4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053.scope: Deactivated successfully. Oct 13 05:56:25.157304 containerd[1568]: time="2025-10-13T05:56:25.157244936Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053\" id:\"4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053\" pid:3455 exited_at:{seconds:1760334985 nanos:156594723}" Oct 13 05:56:25.179759 containerd[1568]: time="2025-10-13T05:56:25.179674113Z" level=info msg="received exit event container_id:\"4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053\" id:\"4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053\" pid:3455 exited_at:{seconds:1760334985 nanos:156594723}" Oct 13 05:56:25.181412 containerd[1568]: time="2025-10-13T05:56:25.181363814Z" level=info msg="StartContainer for \"4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053\" returns successfully" Oct 13 05:56:25.206662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b80f40dbb033e37607c3f1c324f08ada3de0d6a6189e661ca43c4241d003053-rootfs.mount: Deactivated successfully. 
Oct 13 05:56:25.868849 containerd[1568]: time="2025-10-13T05:56:25.868777586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Oct 13 05:56:25.877328 kubelet[2723]: I1013 05:56:25.877163 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5974f6b75c-xgvdz" podStartSLOduration=3.906787404 podStartE2EDuration="5.87714569s" podCreationTimestamp="2025-10-13 05:56:20 +0000 UTC" firstStartedPulling="2025-10-13 05:56:20.761983084 +0000 UTC m=+17.070940648" lastFinishedPulling="2025-10-13 05:56:22.73234137 +0000 UTC m=+19.041298934" observedRunningTime="2025-10-13 05:56:22.867707501 +0000 UTC m=+19.176665075" watchObservedRunningTime="2025-10-13 05:56:25.87714569 +0000 UTC m=+22.186103254" Oct 13 05:56:26.786339 kubelet[2723]: E1013 05:56:26.786286 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p5bx" podUID="83ab7fef-df4c-4467-a7d6-2fb6be70aa2c" Oct 13 05:56:28.786268 kubelet[2723]: E1013 05:56:28.786195 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8p5bx" podUID="83ab7fef-df4c-4467-a7d6-2fb6be70aa2c" Oct 13 05:56:28.988838 containerd[1568]: time="2025-10-13T05:56:28.988758724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:28.989868 containerd[1568]: time="2025-10-13T05:56:28.989790987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Oct 13 05:56:28.991067 containerd[1568]: time="2025-10-13T05:56:28.990991635Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:28.995050 containerd[1568]: time="2025-10-13T05:56:28.995007561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:28.995815 containerd[1568]: time="2025-10-13T05:56:28.995782619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.126945611s" Oct 13 05:56:28.995916 containerd[1568]: time="2025-10-13T05:56:28.995815881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Oct 13 05:56:29.007276 containerd[1568]: time="2025-10-13T05:56:29.007199034Z" level=info msg="CreateContainer within sandbox \"91a966012a134004879e593ad2d3f84eba2a18d7f16cb732313603eaaca16928\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 13 05:56:29.018932 containerd[1568]: time="2025-10-13T05:56:29.018873593Z" level=info msg="Container 
174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:29.033032 containerd[1568]: time="2025-10-13T05:56:29.032971059Z" level=info msg="CreateContainer within sandbox \"91a966012a134004879e593ad2d3f84eba2a18d7f16cb732313603eaaca16928\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f\"" Oct 13 05:56:29.033763 containerd[1568]: time="2025-10-13T05:56:29.033737331Z" level=info msg="StartContainer for \"174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f\"" Oct 13 05:56:29.035089 containerd[1568]: time="2025-10-13T05:56:29.035055109Z" level=info msg="connecting to shim 174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f" address="unix:///run/containerd/s/2ee65ab620b74c092698a83a0b1db383621c9ab35a206259331fb2e3ff5bf28a" protocol=ttrpc version=3 Oct 13 05:56:29.060822 systemd[1]: Started cri-containerd-174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f.scope - libcontainer container 174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f. Oct 13 05:56:29.107319 containerd[1568]: time="2025-10-13T05:56:29.107275720Z" level=info msg="StartContainer for \"174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f\" returns successfully" Oct 13 05:56:30.425745 containerd[1568]: time="2025-10-13T05:56:30.425656846Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 13 05:56:30.428493 systemd[1]: cri-containerd-174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f.scope: Deactivated successfully. Oct 13 05:56:30.429382 systemd[1]: cri-containerd-174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f.scope: Consumed 638ms CPU time, 177.5M memory peak, 4.8M read from disk, 171.3M written to disk. Oct 13 05:56:30.430737 containerd[1568]: time="2025-10-13T05:56:30.430488332Z" level=info msg="received exit event container_id:\"174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f\" id:\"174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f\" pid:3517 exited_at:{seconds:1760334990 nanos:430196033}" Oct 13 05:56:30.430737 containerd[1568]: time="2025-10-13T05:56:30.430648564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f\" id:\"174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f\" pid:3517 exited_at:{seconds:1760334990 nanos:430196033}" Oct 13 05:56:30.451427 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-174830ee1790627e4e57d351737295748ef590c52fb23c46d0ac9666fb79391f-rootfs.mount: Deactivated successfully. Oct 13 05:56:30.494714 kubelet[2723]: I1013 05:56:30.494639 2723 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 13 05:56:30.908560 systemd[1]: Created slice kubepods-besteffort-pod79a8338b_50b1_414a_904a_09688f2cfcc6.slice - libcontainer container kubepods-besteffort-pod79a8338b_50b1_414a_904a_09688f2cfcc6.slice. Oct 13 05:56:30.917376 systemd[1]: Created slice kubepods-besteffort-pod31c8cd51_c9af_4060_a19c_cb1f745e22dd.slice - libcontainer container kubepods-besteffort-pod31c8cd51_c9af_4060_a19c_cb1f745e22dd.slice. 
Oct 13 05:56:30.924552 systemd[1]: Created slice kubepods-burstable-pod0de5fe5b_5d3a_4e6d_bc7a_da4c7be000c5.slice - libcontainer container kubepods-burstable-pod0de5fe5b_5d3a_4e6d_bc7a_da4c7be000c5.slice. Oct 13 05:56:30.932219 systemd[1]: Created slice kubepods-besteffort-pod83ab7fef_df4c_4467_a7d6_2fb6be70aa2c.slice - libcontainer container kubepods-besteffort-pod83ab7fef_df4c_4467_a7d6_2fb6be70aa2c.slice. Oct 13 05:56:30.939700 containerd[1568]: time="2025-10-13T05:56:30.939414535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8p5bx,Uid:83ab7fef-df4c-4467-a7d6-2fb6be70aa2c,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:30.939458 systemd[1]: Created slice kubepods-besteffort-pod12495c87_6d31_4985_8605_8cf9404cc207.slice - libcontainer container kubepods-besteffort-pod12495c87_6d31_4985_8605_8cf9404cc207.slice. Oct 13 05:56:30.951362 systemd[1]: Created slice kubepods-besteffort-pod40e528e5_cb1f_414b_a4ad_47a687c267b9.slice - libcontainer container kubepods-besteffort-pod40e528e5_cb1f_414b_a4ad_47a687c267b9.slice. Oct 13 05:56:30.956840 systemd[1]: Created slice kubepods-besteffort-pod8f3a0ad4_a70d_4eec_9809_f8c1224f0f1d.slice - libcontainer container kubepods-besteffort-pod8f3a0ad4_a70d_4eec_9809_f8c1224f0f1d.slice. Oct 13 05:56:30.966109 systemd[1]: Created slice kubepods-besteffort-pod3fe8a766_96e5_4857_8f34_f1975d7a2a30.slice - libcontainer container kubepods-besteffort-pod3fe8a766_96e5_4857_8f34_f1975d7a2a30.slice. Oct 13 05:56:30.971730 systemd[1]: Created slice kubepods-burstable-podc20bd1d9_f9a0_458b_ba77_ccccb82b1f38.slice - libcontainer container kubepods-burstable-podc20bd1d9_f9a0_458b_ba77_ccccb82b1f38.slice. Oct 13 05:56:31.036442 containerd[1568]: time="2025-10-13T05:56:31.036372818Z" level=error msg="Failed to destroy network for sandbox \"bdf782b0e44c45a0f33c35cd61cf7f33ce96b6f8db6228b95ec35f6a072d4a33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.038639 systemd[1]: run-netns-cni\x2db4ccc3f0\x2d2893\x2d95dd\x2ded26\x2d0832d3e1de40.mount: Deactivated successfully. 
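A side note on the sandbox failures that follow: both error strings in this stretch of the log point at on-disk preconditions. The CNI config under /etc/cni/net.d is laid down by the install-cni container started above, and /var/lib/calico/nodename is written by calico-node once it is running (it is pulled and started further down this log); until then every RunPodSandbox attempt fails with the stat error shown here. A rough standard-library sketch of those two checks, for illustration only:

    // Illustration only: the two paths the errors in this log point at.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // calico-node writes the node name here once it is running.
        if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
            fmt.Println("calico-node not ready:", err)
        }
        // install-cni drops the network config here.
        conf, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
        if len(conf) == 0 {
            fmt.Println("no CNI network config found in /etc/cni/net.d")
        }
    }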
Oct 13 05:56:31.040215 containerd[1568]: time="2025-10-13T05:56:31.040152094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8p5bx,Uid:83ab7fef-df4c-4467-a7d6-2fb6be70aa2c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf782b0e44c45a0f33c35cd61cf7f33ce96b6f8db6228b95ec35f6a072d4a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.040524 kubelet[2723]: E1013 05:56:31.040467 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf782b0e44c45a0f33c35cd61cf7f33ce96b6f8db6228b95ec35f6a072d4a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.040600 kubelet[2723]: E1013 05:56:31.040559 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf782b0e44c45a0f33c35cd61cf7f33ce96b6f8db6228b95ec35f6a072d4a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8p5bx" Oct 13 05:56:31.040600 kubelet[2723]: E1013 05:56:31.040578 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bdf782b0e44c45a0f33c35cd61cf7f33ce96b6f8db6228b95ec35f6a072d4a33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8p5bx" Oct 13 05:56:31.040679 kubelet[2723]: E1013 05:56:31.040642 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8p5bx_calico-system(83ab7fef-df4c-4467-a7d6-2fb6be70aa2c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8p5bx_calico-system(83ab7fef-df4c-4467-a7d6-2fb6be70aa2c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bdf782b0e44c45a0f33c35cd61cf7f33ce96b6f8db6228b95ec35f6a072d4a33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8p5bx" podUID="83ab7fef-df4c-4467-a7d6-2fb6be70aa2c" Oct 13 05:56:31.062090 kubelet[2723]: I1013 05:56:31.062019 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48z8j\" (UniqueName: \"kubernetes.io/projected/8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d-kube-api-access-48z8j\") pod \"calico-apiserver-657947797c-8j6v8\" (UID: \"8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d\") " pod="calico-apiserver/calico-apiserver-657947797c-8j6v8" Oct 13 05:56:31.062090 kubelet[2723]: I1013 05:56:31.062075 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5-config-volume\") pod \"coredns-66bc5c9577-5s42v\" (UID: 
\"0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5\") " pod="kube-system/coredns-66bc5c9577-5s42v" Oct 13 05:56:31.062090 kubelet[2723]: I1013 05:56:31.062095 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmn4f\" (UniqueName: \"kubernetes.io/projected/3fe8a766-96e5-4857-8f34-f1975d7a2a30-kube-api-access-dmn4f\") pod \"calico-kube-controllers-7948465b6c-q68c8\" (UID: \"3fe8a766-96e5-4857-8f34-f1975d7a2a30\") " pod="calico-system/calico-kube-controllers-7948465b6c-q68c8" Oct 13 05:56:31.062090 kubelet[2723]: I1013 05:56:31.062113 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzj5m\" (UniqueName: \"kubernetes.io/projected/79a8338b-50b1-414a-904a-09688f2cfcc6-kube-api-access-rzj5m\") pod \"whisker-ff4df9f49-bq2zd\" (UID: \"79a8338b-50b1-414a-904a-09688f2cfcc6\") " pod="calico-system/whisker-ff4df9f49-bq2zd" Oct 13 05:56:31.062472 kubelet[2723]: I1013 05:56:31.062192 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh97v\" (UniqueName: \"kubernetes.io/projected/0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5-kube-api-access-hh97v\") pod \"coredns-66bc5c9577-5s42v\" (UID: \"0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5\") " pod="kube-system/coredns-66bc5c9577-5s42v" Oct 13 05:56:31.062472 kubelet[2723]: I1013 05:56:31.062243 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnd5n\" (UniqueName: \"kubernetes.io/projected/c20bd1d9-f9a0-458b-ba77-ccccb82b1f38-kube-api-access-bnd5n\") pod \"coredns-66bc5c9577-bp2xn\" (UID: \"c20bd1d9-f9a0-458b-ba77-ccccb82b1f38\") " pod="kube-system/coredns-66bc5c9577-bp2xn" Oct 13 05:56:31.062472 kubelet[2723]: I1013 05:56:31.062267 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5khwm\" (UniqueName: \"kubernetes.io/projected/40e528e5-cb1f-414b-a4ad-47a687c267b9-kube-api-access-5khwm\") pod \"calico-apiserver-7cd748bcb9-d4vnc\" (UID: \"40e528e5-cb1f-414b-a4ad-47a687c267b9\") " pod="calico-apiserver/calico-apiserver-7cd748bcb9-d4vnc" Oct 13 05:56:31.062472 kubelet[2723]: I1013 05:56:31.062291 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmdpc\" (UniqueName: \"kubernetes.io/projected/12495c87-6d31-4985-8605-8cf9404cc207-kube-api-access-xmdpc\") pod \"calico-apiserver-657947797c-kdvwv\" (UID: \"12495c87-6d31-4985-8605-8cf9404cc207\") " pod="calico-apiserver/calico-apiserver-657947797c-kdvwv" Oct 13 05:56:31.062472 kubelet[2723]: I1013 05:56:31.062317 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/31c8cd51-c9af-4060-a19c-cb1f745e22dd-config\") pod \"goldmane-854f97d977-h4hz8\" (UID: \"31c8cd51-c9af-4060-a19c-cb1f745e22dd\") " pod="calico-system/goldmane-854f97d977-h4hz8" Oct 13 05:56:31.062637 kubelet[2723]: I1013 05:56:31.062338 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/79a8338b-50b1-414a-904a-09688f2cfcc6-whisker-backend-key-pair\") pod \"whisker-ff4df9f49-bq2zd\" (UID: \"79a8338b-50b1-414a-904a-09688f2cfcc6\") " pod="calico-system/whisker-ff4df9f49-bq2zd" Oct 13 05:56:31.062637 kubelet[2723]: I1013 05:56:31.062355 2723 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8338b-50b1-414a-904a-09688f2cfcc6-whisker-ca-bundle\") pod \"whisker-ff4df9f49-bq2zd\" (UID: \"79a8338b-50b1-414a-904a-09688f2cfcc6\") " pod="calico-system/whisker-ff4df9f49-bq2zd" Oct 13 05:56:31.062637 kubelet[2723]: I1013 05:56:31.062369 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c20bd1d9-f9a0-458b-ba77-ccccb82b1f38-config-volume\") pod \"coredns-66bc5c9577-bp2xn\" (UID: \"c20bd1d9-f9a0-458b-ba77-ccccb82b1f38\") " pod="kube-system/coredns-66bc5c9577-bp2xn" Oct 13 05:56:31.062637 kubelet[2723]: I1013 05:56:31.062392 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12495c87-6d31-4985-8605-8cf9404cc207-calico-apiserver-certs\") pod \"calico-apiserver-657947797c-kdvwv\" (UID: \"12495c87-6d31-4985-8605-8cf9404cc207\") " pod="calico-apiserver/calico-apiserver-657947797c-kdvwv" Oct 13 05:56:31.062637 kubelet[2723]: I1013 05:56:31.062423 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/31c8cd51-c9af-4060-a19c-cb1f745e22dd-goldmane-key-pair\") pod \"goldmane-854f97d977-h4hz8\" (UID: \"31c8cd51-c9af-4060-a19c-cb1f745e22dd\") " pod="calico-system/goldmane-854f97d977-h4hz8" Oct 13 05:56:31.062828 kubelet[2723]: I1013 05:56:31.062476 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31c8cd51-c9af-4060-a19c-cb1f745e22dd-goldmane-ca-bundle\") pod \"goldmane-854f97d977-h4hz8\" (UID: \"31c8cd51-c9af-4060-a19c-cb1f745e22dd\") " pod="calico-system/goldmane-854f97d977-h4hz8" Oct 13 05:56:31.062828 kubelet[2723]: I1013 05:56:31.062537 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbv8b\" (UniqueName: \"kubernetes.io/projected/31c8cd51-c9af-4060-a19c-cb1f745e22dd-kube-api-access-xbv8b\") pod \"goldmane-854f97d977-h4hz8\" (UID: \"31c8cd51-c9af-4060-a19c-cb1f745e22dd\") " pod="calico-system/goldmane-854f97d977-h4hz8" Oct 13 05:56:31.062828 kubelet[2723]: I1013 05:56:31.062557 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d-calico-apiserver-certs\") pod \"calico-apiserver-657947797c-8j6v8\" (UID: \"8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d\") " pod="calico-apiserver/calico-apiserver-657947797c-8j6v8" Oct 13 05:56:31.062828 kubelet[2723]: I1013 05:56:31.062592 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3fe8a766-96e5-4857-8f34-f1975d7a2a30-tigera-ca-bundle\") pod \"calico-kube-controllers-7948465b6c-q68c8\" (UID: \"3fe8a766-96e5-4857-8f34-f1975d7a2a30\") " pod="calico-system/calico-kube-controllers-7948465b6c-q68c8" Oct 13 05:56:31.062828 kubelet[2723]: I1013 05:56:31.062633 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/40e528e5-cb1f-414b-a4ad-47a687c267b9-calico-apiserver-certs\") pod 
\"calico-apiserver-7cd748bcb9-d4vnc\" (UID: \"40e528e5-cb1f-414b-a4ad-47a687c267b9\") " pod="calico-apiserver/calico-apiserver-7cd748bcb9-d4vnc" Oct 13 05:56:31.220401 containerd[1568]: time="2025-10-13T05:56:31.220286564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ff4df9f49-bq2zd,Uid:79a8338b-50b1-414a-904a-09688f2cfcc6,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:31.224627 containerd[1568]: time="2025-10-13T05:56:31.224466242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-h4hz8,Uid:31c8cd51-c9af-4060-a19c-cb1f745e22dd,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:31.231965 containerd[1568]: time="2025-10-13T05:56:31.231919626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5s42v,Uid:0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5,Namespace:kube-system,Attempt:0,}" Oct 13 05:56:31.246051 containerd[1568]: time="2025-10-13T05:56:31.245988580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657947797c-kdvwv,Uid:12495c87-6d31-4985-8605-8cf9404cc207,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:31.261794 containerd[1568]: time="2025-10-13T05:56:31.261736919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd748bcb9-d4vnc,Uid:40e528e5-cb1f-414b-a4ad-47a687c267b9,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:31.262915 containerd[1568]: time="2025-10-13T05:56:31.262849602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657947797c-8j6v8,Uid:8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:31.276155 containerd[1568]: time="2025-10-13T05:56:31.275621816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7948465b6c-q68c8,Uid:3fe8a766-96e5-4857-8f34-f1975d7a2a30,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:31.279648 containerd[1568]: time="2025-10-13T05:56:31.279491843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bp2xn,Uid:c20bd1d9-f9a0-458b-ba77-ccccb82b1f38,Namespace:kube-system,Attempt:0,}" Oct 13 05:56:31.318894 containerd[1568]: time="2025-10-13T05:56:31.318848090Z" level=error msg="Failed to destroy network for sandbox \"5c559188b3b7b67cd3ad4717c7cf65a9ed0364cf37df047f4c4e5391a2a41633\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.322727 containerd[1568]: time="2025-10-13T05:56:31.322663624Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-h4hz8,Uid:31c8cd51-c9af-4060-a19c-cb1f745e22dd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c559188b3b7b67cd3ad4717c7cf65a9ed0364cf37df047f4c4e5391a2a41633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.323705 kubelet[2723]: E1013 05:56:31.323321 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c559188b3b7b67cd3ad4717c7cf65a9ed0364cf37df047f4c4e5391a2a41633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 
05:56:31.323705 kubelet[2723]: E1013 05:56:31.323421 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c559188b3b7b67cd3ad4717c7cf65a9ed0364cf37df047f4c4e5391a2a41633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-h4hz8" Oct 13 05:56:31.323705 kubelet[2723]: E1013 05:56:31.323461 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5c559188b3b7b67cd3ad4717c7cf65a9ed0364cf37df047f4c4e5391a2a41633\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-854f97d977-h4hz8" Oct 13 05:56:31.323864 kubelet[2723]: E1013 05:56:31.323728 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-854f97d977-h4hz8_calico-system(31c8cd51-c9af-4060-a19c-cb1f745e22dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-854f97d977-h4hz8_calico-system(31c8cd51-c9af-4060-a19c-cb1f745e22dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5c559188b3b7b67cd3ad4717c7cf65a9ed0364cf37df047f4c4e5391a2a41633\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-854f97d977-h4hz8" podUID="31c8cd51-c9af-4060-a19c-cb1f745e22dd" Oct 13 05:56:31.334799 containerd[1568]: time="2025-10-13T05:56:31.334674928Z" level=error msg="Failed to destroy network for sandbox \"96df97830306781474fa958c5468f305a111f443e04ed60840720a4a86e2bb0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.337003 containerd[1568]: time="2025-10-13T05:56:31.336957038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ff4df9f49-bq2zd,Uid:79a8338b-50b1-414a-904a-09688f2cfcc6,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"96df97830306781474fa958c5468f305a111f443e04ed60840720a4a86e2bb0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.337404 kubelet[2723]: E1013 05:56:31.337355 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96df97830306781474fa958c5468f305a111f443e04ed60840720a4a86e2bb0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.337461 kubelet[2723]: E1013 05:56:31.337424 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96df97830306781474fa958c5468f305a111f443e04ed60840720a4a86e2bb0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-ff4df9f49-bq2zd" Oct 13 05:56:31.337461 kubelet[2723]: E1013 05:56:31.337445 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96df97830306781474fa958c5468f305a111f443e04ed60840720a4a86e2bb0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-ff4df9f49-bq2zd" Oct 13 05:56:31.337994 kubelet[2723]: E1013 05:56:31.337759 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-ff4df9f49-bq2zd_calico-system(79a8338b-50b1-414a-904a-09688f2cfcc6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-ff4df9f49-bq2zd_calico-system(79a8338b-50b1-414a-904a-09688f2cfcc6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96df97830306781474fa958c5468f305a111f443e04ed60840720a4a86e2bb0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-ff4df9f49-bq2zd" podUID="79a8338b-50b1-414a-904a-09688f2cfcc6" Oct 13 05:56:31.346015 containerd[1568]: time="2025-10-13T05:56:31.345962901Z" level=error msg="Failed to destroy network for sandbox \"e51a12d9115a11b13907005fbce962c53df1bea89ca0109ed8ef7261c0ca6e9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.347338 containerd[1568]: time="2025-10-13T05:56:31.347291469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5s42v,Uid:0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e51a12d9115a11b13907005fbce962c53df1bea89ca0109ed8ef7261c0ca6e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.347583 kubelet[2723]: E1013 05:56:31.347537 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e51a12d9115a11b13907005fbce962c53df1bea89ca0109ed8ef7261c0ca6e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.347635 kubelet[2723]: E1013 05:56:31.347607 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e51a12d9115a11b13907005fbce962c53df1bea89ca0109ed8ef7261c0ca6e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5s42v" Oct 13 05:56:31.347635 kubelet[2723]: E1013 05:56:31.347625 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e51a12d9115a11b13907005fbce962c53df1bea89ca0109ed8ef7261c0ca6e9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5s42v" Oct 13 05:56:31.349074 kubelet[2723]: E1013 05:56:31.348741 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5s42v_kube-system(0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5s42v_kube-system(0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e51a12d9115a11b13907005fbce962c53df1bea89ca0109ed8ef7261c0ca6e9a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5s42v" podUID="0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5" Oct 13 05:56:31.366271 containerd[1568]: time="2025-10-13T05:56:31.366202126Z" level=error msg="Failed to destroy network for sandbox \"7d57c2bcd8c7409875e2723dcea88ac39c9f0d418fb086e5cf507cbf47d23508\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.370548 containerd[1568]: time="2025-10-13T05:56:31.370501339Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657947797c-kdvwv,Uid:12495c87-6d31-4985-8605-8cf9404cc207,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d57c2bcd8c7409875e2723dcea88ac39c9f0d418fb086e5cf507cbf47d23508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.371210 kubelet[2723]: E1013 05:56:31.370809 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d57c2bcd8c7409875e2723dcea88ac39c9f0d418fb086e5cf507cbf47d23508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.371210 kubelet[2723]: E1013 05:56:31.370890 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d57c2bcd8c7409875e2723dcea88ac39c9f0d418fb086e5cf507cbf47d23508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-657947797c-kdvwv" Oct 13 05:56:31.371210 kubelet[2723]: E1013 05:56:31.370907 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d57c2bcd8c7409875e2723dcea88ac39c9f0d418fb086e5cf507cbf47d23508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-657947797c-kdvwv" Oct 13 05:56:31.371330 
kubelet[2723]: E1013 05:56:31.370964 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-657947797c-kdvwv_calico-apiserver(12495c87-6d31-4985-8605-8cf9404cc207)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-657947797c-kdvwv_calico-apiserver(12495c87-6d31-4985-8605-8cf9404cc207)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d57c2bcd8c7409875e2723dcea88ac39c9f0d418fb086e5cf507cbf47d23508\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-657947797c-kdvwv" podUID="12495c87-6d31-4985-8605-8cf9404cc207" Oct 13 05:56:31.383954 containerd[1568]: time="2025-10-13T05:56:31.383896965Z" level=error msg="Failed to destroy network for sandbox \"bf37bb8b64b044b4f9b191d6be72b263eb515e3a18808e63114d78c2d3fe3204\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.384564 containerd[1568]: time="2025-10-13T05:56:31.384524355Z" level=error msg="Failed to destroy network for sandbox \"73d88cd8458a8922c47a0adb5f048306a4b7ea29bdd2d82b25d1c3226527e0de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.385488 containerd[1568]: time="2025-10-13T05:56:31.385408978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd748bcb9-d4vnc,Uid:40e528e5-cb1f-414b-a4ad-47a687c267b9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf37bb8b64b044b4f9b191d6be72b263eb515e3a18808e63114d78c2d3fe3204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.385917 kubelet[2723]: E1013 05:56:31.385833 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf37bb8b64b044b4f9b191d6be72b263eb515e3a18808e63114d78c2d3fe3204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.386010 kubelet[2723]: E1013 05:56:31.385973 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf37bb8b64b044b4f9b191d6be72b263eb515e3a18808e63114d78c2d3fe3204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd748bcb9-d4vnc" Oct 13 05:56:31.386073 kubelet[2723]: E1013 05:56:31.386008 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf37bb8b64b044b4f9b191d6be72b263eb515e3a18808e63114d78c2d3fe3204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd748bcb9-d4vnc" Oct 13 05:56:31.386191 kubelet[2723]: E1013 05:56:31.386132 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd748bcb9-d4vnc_calico-apiserver(40e528e5-cb1f-414b-a4ad-47a687c267b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd748bcb9-d4vnc_calico-apiserver(40e528e5-cb1f-414b-a4ad-47a687c267b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf37bb8b64b044b4f9b191d6be72b263eb515e3a18808e63114d78c2d3fe3204\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd748bcb9-d4vnc" podUID="40e528e5-cb1f-414b-a4ad-47a687c267b9" Oct 13 05:56:31.386748 containerd[1568]: time="2025-10-13T05:56:31.386667625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657947797c-8j6v8,Uid:8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d88cd8458a8922c47a0adb5f048306a4b7ea29bdd2d82b25d1c3226527e0de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.387091 kubelet[2723]: E1013 05:56:31.387027 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d88cd8458a8922c47a0adb5f048306a4b7ea29bdd2d82b25d1c3226527e0de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.387091 kubelet[2723]: E1013 05:56:31.387099 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d88cd8458a8922c47a0adb5f048306a4b7ea29bdd2d82b25d1c3226527e0de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-657947797c-8j6v8" Oct 13 05:56:31.387267 kubelet[2723]: E1013 05:56:31.387118 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d88cd8458a8922c47a0adb5f048306a4b7ea29bdd2d82b25d1c3226527e0de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-657947797c-8j6v8" Oct 13 05:56:31.387267 kubelet[2723]: E1013 05:56:31.387178 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-657947797c-8j6v8_calico-apiserver(8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-657947797c-8j6v8_calico-apiserver(8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73d88cd8458a8922c47a0adb5f048306a4b7ea29bdd2d82b25d1c3226527e0de\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-657947797c-8j6v8" podUID="8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d" Oct 13 05:56:31.391093 containerd[1568]: time="2025-10-13T05:56:31.391052309Z" level=error msg="Failed to destroy network for sandbox \"a3ff1a4173150158f9a419bc54202a79d42603d39db800a4bd144f83f6eafbc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.392557 containerd[1568]: time="2025-10-13T05:56:31.392434608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bp2xn,Uid:c20bd1d9-f9a0-458b-ba77-ccccb82b1f38,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3ff1a4173150158f9a419bc54202a79d42603d39db800a4bd144f83f6eafbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.392930 kubelet[2723]: E1013 05:56:31.392895 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3ff1a4173150158f9a419bc54202a79d42603d39db800a4bd144f83f6eafbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.392993 kubelet[2723]: E1013 05:56:31.392948 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3ff1a4173150158f9a419bc54202a79d42603d39db800a4bd144f83f6eafbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bp2xn" Oct 13 05:56:31.392993 kubelet[2723]: E1013 05:56:31.392965 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3ff1a4173150158f9a419bc54202a79d42603d39db800a4bd144f83f6eafbc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bp2xn" Oct 13 05:56:31.393069 kubelet[2723]: E1013 05:56:31.393014 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-bp2xn_kube-system(c20bd1d9-f9a0-458b-ba77-ccccb82b1f38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-bp2xn_kube-system(c20bd1d9-f9a0-458b-ba77-ccccb82b1f38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3ff1a4173150158f9a419bc54202a79d42603d39db800a4bd144f83f6eafbc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-bp2xn" podUID="c20bd1d9-f9a0-458b-ba77-ccccb82b1f38" Oct 13 05:56:31.393360 containerd[1568]: time="2025-10-13T05:56:31.393249360Z" level=error msg="Failed to 
destroy network for sandbox \"081f25dacf5b33e8a41bdb270ca4a2b8f9bf250e71549b4ec4bfd5e61f26f18f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.394906 containerd[1568]: time="2025-10-13T05:56:31.394858014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7948465b6c-q68c8,Uid:3fe8a766-96e5-4857-8f34-f1975d7a2a30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"081f25dacf5b33e8a41bdb270ca4a2b8f9bf250e71549b4ec4bfd5e61f26f18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.395106 kubelet[2723]: E1013 05:56:31.395050 2723 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"081f25dacf5b33e8a41bdb270ca4a2b8f9bf250e71549b4ec4bfd5e61f26f18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 13 05:56:31.395106 kubelet[2723]: E1013 05:56:31.395088 2723 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"081f25dacf5b33e8a41bdb270ca4a2b8f9bf250e71549b4ec4bfd5e61f26f18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7948465b6c-q68c8" Oct 13 05:56:31.395202 kubelet[2723]: E1013 05:56:31.395106 2723 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"081f25dacf5b33e8a41bdb270ca4a2b8f9bf250e71549b4ec4bfd5e61f26f18f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7948465b6c-q68c8" Oct 13 05:56:31.395202 kubelet[2723]: E1013 05:56:31.395153 2723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7948465b6c-q68c8_calico-system(3fe8a766-96e5-4857-8f34-f1975d7a2a30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7948465b6c-q68c8_calico-system(3fe8a766-96e5-4857-8f34-f1975d7a2a30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"081f25dacf5b33e8a41bdb270ca4a2b8f9bf250e71549b4ec4bfd5e61f26f18f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7948465b6c-q68c8" podUID="3fe8a766-96e5-4857-8f34-f1975d7a2a30" Oct 13 05:56:31.880645 containerd[1568]: time="2025-10-13T05:56:31.880598266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Oct 13 05:56:39.631142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount130772921.mount: Deactivated successfully. 
Oct 13 05:56:40.803610 containerd[1568]: time="2025-10-13T05:56:40.803522445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:40.804448 containerd[1568]: time="2025-10-13T05:56:40.804381638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Oct 13 05:56:40.806062 containerd[1568]: time="2025-10-13T05:56:40.806013433Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:40.808129 containerd[1568]: time="2025-10-13T05:56:40.808088339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:40.808619 containerd[1568]: time="2025-10-13T05:56:40.808572017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.927933214s" Oct 13 05:56:40.808619 containerd[1568]: time="2025-10-13T05:56:40.808617502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Oct 13 05:56:40.838389 containerd[1568]: time="2025-10-13T05:56:40.838342213Z" level=info msg="CreateContainer within sandbox \"91a966012a134004879e593ad2d3f84eba2a18d7f16cb732313603eaaca16928\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 13 05:56:40.848858 containerd[1568]: time="2025-10-13T05:56:40.848788982Z" level=info msg="Container a10e5765918ba9d5bd2f55abb28ab588ce9feee33269e34c5e7c207adc3b9f46: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:40.868211 containerd[1568]: time="2025-10-13T05:56:40.868152674Z" level=info msg="CreateContainer within sandbox \"91a966012a134004879e593ad2d3f84eba2a18d7f16cb732313603eaaca16928\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a10e5765918ba9d5bd2f55abb28ab588ce9feee33269e34c5e7c207adc3b9f46\"" Oct 13 05:56:40.868825 containerd[1568]: time="2025-10-13T05:56:40.868772589Z" level=info msg="StartContainer for \"a10e5765918ba9d5bd2f55abb28ab588ce9feee33269e34c5e7c207adc3b9f46\"" Oct 13 05:56:40.870412 containerd[1568]: time="2025-10-13T05:56:40.870381791Z" level=info msg="connecting to shim a10e5765918ba9d5bd2f55abb28ab588ce9feee33269e34c5e7c207adc3b9f46" address="unix:///run/containerd/s/2ee65ab620b74c092698a83a0b1db383621c9ab35a206259331fb2e3ff5bf28a" protocol=ttrpc version=3 Oct 13 05:56:40.899857 systemd[1]: Started cri-containerd-a10e5765918ba9d5bd2f55abb28ab588ce9feee33269e34c5e7c207adc3b9f46.scope - libcontainer container a10e5765918ba9d5bd2f55abb28ab588ce9feee33269e34c5e7c207adc3b9f46. Oct 13 05:56:40.957865 containerd[1568]: time="2025-10-13T05:56:40.957796408Z" level=info msg="StartContainer for \"a10e5765918ba9d5bd2f55abb28ab588ce9feee33269e34c5e7c207adc3b9f46\" returns successfully" Oct 13 05:56:41.049134 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 13 05:56:41.049984 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Oct 13 05:56:41.456139 kubelet[2723]: I1013 05:56:41.456063 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:41.738918 kubelet[2723]: I1013 05:56:41.738747 2723 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8338b-50b1-414a-904a-09688f2cfcc6-whisker-ca-bundle\") pod \"79a8338b-50b1-414a-904a-09688f2cfcc6\" (UID: \"79a8338b-50b1-414a-904a-09688f2cfcc6\") " Oct 13 05:56:41.739852 kubelet[2723]: I1013 05:56:41.739803 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79a8338b-50b1-414a-904a-09688f2cfcc6-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "79a8338b-50b1-414a-904a-09688f2cfcc6" (UID: "79a8338b-50b1-414a-904a-09688f2cfcc6"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 13 05:56:41.741880 kubelet[2723]: I1013 05:56:41.741847 2723 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/79a8338b-50b1-414a-904a-09688f2cfcc6-whisker-backend-key-pair\") pod \"79a8338b-50b1-414a-904a-09688f2cfcc6\" (UID: \"79a8338b-50b1-414a-904a-09688f2cfcc6\") " Oct 13 05:56:41.742342 kubelet[2723]: I1013 05:56:41.741900 2723 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzj5m\" (UniqueName: \"kubernetes.io/projected/79a8338b-50b1-414a-904a-09688f2cfcc6-kube-api-access-rzj5m\") pod \"79a8338b-50b1-414a-904a-09688f2cfcc6\" (UID: \"79a8338b-50b1-414a-904a-09688f2cfcc6\") " Oct 13 05:56:41.742342 kubelet[2723]: I1013 05:56:41.742022 2723 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79a8338b-50b1-414a-904a-09688f2cfcc6-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 13 05:56:41.747649 kubelet[2723]: I1013 05:56:41.747594 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79a8338b-50b1-414a-904a-09688f2cfcc6-kube-api-access-rzj5m" (OuterVolumeSpecName: "kube-api-access-rzj5m") pod "79a8338b-50b1-414a-904a-09688f2cfcc6" (UID: "79a8338b-50b1-414a-904a-09688f2cfcc6"). InnerVolumeSpecName "kube-api-access-rzj5m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:56:41.748517 kubelet[2723]: I1013 05:56:41.748460 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79a8338b-50b1-414a-904a-09688f2cfcc6-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "79a8338b-50b1-414a-904a-09688f2cfcc6" (UID: "79a8338b-50b1-414a-904a-09688f2cfcc6"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:56:41.796177 systemd[1]: Removed slice kubepods-besteffort-pod79a8338b_50b1_414a_904a_09688f2cfcc6.slice - libcontainer container kubepods-besteffort-pod79a8338b_50b1_414a_904a_09688f2cfcc6.slice. Oct 13 05:56:41.814847 systemd[1]: var-lib-kubelet-pods-79a8338b\x2d50b1\x2d414a\x2d904a\x2d09688f2cfcc6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drzj5m.mount: Deactivated successfully. Oct 13 05:56:41.814961 systemd[1]: var-lib-kubelet-pods-79a8338b\x2d50b1\x2d414a\x2d904a\x2d09688f2cfcc6-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
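The "Observed pod startup duration" entries above and just below are internally consistent with a simple relationship: podStartSLOduration is the end-to-end duration minus the time spent pulling images. For calico-typha above, 5.877s − (22.732s − 20.762s) ≈ 3.907s, matching the logged value, and the calico-node entry below works out the same way. A small sketch recomputing the typha figures from the logged timestamps (time layout assumed):

    // Recomputing the calico-typha startup figures from the log entry above.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05 -0700 MST"
        created, _ := time.Parse(layout, "2025-10-13 05:56:20 +0000 UTC")
        firstPull, _ := time.Parse(layout, "2025-10-13 05:56:20.761983084 +0000 UTC")
        lastPull, _ := time.Parse(layout, "2025-10-13 05:56:22.73234137 +0000 UTC")
        watchObserved, _ := time.Parse(layout, "2025-10-13 05:56:25.87714569 +0000 UTC")

        e2e := watchObserved.Sub(created)    // ≈ 5.877s (podStartE2EDuration)
        slo := e2e - lastPull.Sub(firstPull) // ≈ 3.907s (podStartSLOduration)
        fmt.Println(e2e, slo)
    }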
Oct 13 05:56:41.842890 kubelet[2723]: I1013 05:56:41.842824 2723 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/79a8338b-50b1-414a-904a-09688f2cfcc6-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 13 05:56:41.842890 kubelet[2723]: I1013 05:56:41.842863 2723 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzj5m\" (UniqueName: \"kubernetes.io/projected/79a8338b-50b1-414a-904a-09688f2cfcc6-kube-api-access-rzj5m\") on node \"localhost\" DevicePath \"\"" Oct 13 05:56:41.926116 kubelet[2723]: I1013 05:56:41.925262 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nmqq5" podStartSLOduration=2.128329822 podStartE2EDuration="21.925243099s" podCreationTimestamp="2025-10-13 05:56:20 +0000 UTC" firstStartedPulling="2025-10-13 05:56:21.012816284 +0000 UTC m=+17.321773838" lastFinishedPulling="2025-10-13 05:56:40.809729551 +0000 UTC m=+37.118687115" observedRunningTime="2025-10-13 05:56:41.925049074 +0000 UTC m=+38.234006648" watchObservedRunningTime="2025-10-13 05:56:41.925243099 +0000 UTC m=+38.234200663" Oct 13 05:56:42.072103 systemd[1]: Created slice kubepods-besteffort-pod514c8894_3a0b_4b4b_84ac_d41f652dde18.slice - libcontainer container kubepods-besteffort-pod514c8894_3a0b_4b4b_84ac_d41f652dde18.slice. Oct 13 05:56:42.144858 kubelet[2723]: I1013 05:56:42.144786 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzvkr\" (UniqueName: \"kubernetes.io/projected/514c8894-3a0b-4b4b-84ac-d41f652dde18-kube-api-access-tzvkr\") pod \"whisker-7f99db6c64-kl67n\" (UID: \"514c8894-3a0b-4b4b-84ac-d41f652dde18\") " pod="calico-system/whisker-7f99db6c64-kl67n" Oct 13 05:56:42.144858 kubelet[2723]: I1013 05:56:42.144850 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/514c8894-3a0b-4b4b-84ac-d41f652dde18-whisker-ca-bundle\") pod \"whisker-7f99db6c64-kl67n\" (UID: \"514c8894-3a0b-4b4b-84ac-d41f652dde18\") " pod="calico-system/whisker-7f99db6c64-kl67n" Oct 13 05:56:42.144858 kubelet[2723]: I1013 05:56:42.144874 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/514c8894-3a0b-4b4b-84ac-d41f652dde18-whisker-backend-key-pair\") pod \"whisker-7f99db6c64-kl67n\" (UID: \"514c8894-3a0b-4b4b-84ac-d41f652dde18\") " pod="calico-system/whisker-7f99db6c64-kl67n" Oct 13 05:56:42.379619 containerd[1568]: time="2025-10-13T05:56:42.379488064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f99db6c64-kl67n,Uid:514c8894-3a0b-4b4b-84ac-d41f652dde18,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:42.550134 systemd-networkd[1493]: calib3959435b50: Link UP Oct 13 05:56:42.550401 systemd-networkd[1493]: calib3959435b50: Gained carrier Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.405 [INFO][3933] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.426 [INFO][3933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7f99db6c64--kl67n-eth0 whisker-7f99db6c64- calico-system 514c8894-3a0b-4b4b-84ac-d41f652dde18 892 0 2025-10-13 05:56:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker 
pod-template-hash:7f99db6c64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7f99db6c64-kl67n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib3959435b50 [] [] }} ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Namespace="calico-system" Pod="whisker-7f99db6c64-kl67n" WorkloadEndpoint="localhost-k8s-whisker--7f99db6c64--kl67n-" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.427 [INFO][3933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Namespace="calico-system" Pod="whisker-7f99db6c64-kl67n" WorkloadEndpoint="localhost-k8s-whisker--7f99db6c64--kl67n-eth0" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.497 [INFO][3948] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" HandleID="k8s-pod-network.514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Workload="localhost-k8s-whisker--7f99db6c64--kl67n-eth0" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.497 [INFO][3948] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" HandleID="k8s-pod-network.514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Workload="localhost-k8s-whisker--7f99db6c64--kl67n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a2a10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7f99db6c64-kl67n", "timestamp":"2025-10-13 05:56:42.497156103 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.497 [INFO][3948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.497 [INFO][3948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.498 [INFO][3948] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.508 [INFO][3948] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" host="localhost" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.515 [INFO][3948] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.520 [INFO][3948] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.522 [INFO][3948] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.525 [INFO][3948] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.525 [INFO][3948] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" host="localhost" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.526 [INFO][3948] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238 Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.531 [INFO][3948] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" host="localhost" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.537 [INFO][3948] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" host="localhost" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.537 [INFO][3948] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" host="localhost" Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.537 [INFO][3948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:56:42.568059 containerd[1568]: 2025-10-13 05:56:42.537 [INFO][3948] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" HandleID="k8s-pod-network.514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Workload="localhost-k8s-whisker--7f99db6c64--kl67n-eth0" Oct 13 05:56:42.569192 containerd[1568]: 2025-10-13 05:56:42.541 [INFO][3933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Namespace="calico-system" Pod="whisker-7f99db6c64-kl67n" WorkloadEndpoint="localhost-k8s-whisker--7f99db6c64--kl67n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f99db6c64--kl67n-eth0", GenerateName:"whisker-7f99db6c64-", Namespace:"calico-system", SelfLink:"", UID:"514c8894-3a0b-4b4b-84ac-d41f652dde18", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f99db6c64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7f99db6c64-kl67n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib3959435b50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:42.569192 containerd[1568]: 2025-10-13 05:56:42.541 [INFO][3933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Namespace="calico-system" Pod="whisker-7f99db6c64-kl67n" WorkloadEndpoint="localhost-k8s-whisker--7f99db6c64--kl67n-eth0" Oct 13 05:56:42.569192 containerd[1568]: 2025-10-13 05:56:42.541 [INFO][3933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3959435b50 ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Namespace="calico-system" Pod="whisker-7f99db6c64-kl67n" WorkloadEndpoint="localhost-k8s-whisker--7f99db6c64--kl67n-eth0" Oct 13 05:56:42.569192 containerd[1568]: 2025-10-13 05:56:42.551 [INFO][3933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Namespace="calico-system" Pod="whisker-7f99db6c64-kl67n" WorkloadEndpoint="localhost-k8s-whisker--7f99db6c64--kl67n-eth0" Oct 13 05:56:42.569192 containerd[1568]: 2025-10-13 05:56:42.553 [INFO][3933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Namespace="calico-system" Pod="whisker-7f99db6c64-kl67n" WorkloadEndpoint="localhost-k8s-whisker--7f99db6c64--kl67n-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f99db6c64--kl67n-eth0", GenerateName:"whisker-7f99db6c64-", Namespace:"calico-system", SelfLink:"", UID:"514c8894-3a0b-4b4b-84ac-d41f652dde18", ResourceVersion:"892", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f99db6c64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238", Pod:"whisker-7f99db6c64-kl67n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib3959435b50", MAC:"26:09:59:d3:08:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:42.569192 containerd[1568]: 2025-10-13 05:56:42.564 [INFO][3933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" Namespace="calico-system" Pod="whisker-7f99db6c64-kl67n" WorkloadEndpoint="localhost-k8s-whisker--7f99db6c64--kl67n-eth0" Oct 13 05:56:42.765682 containerd[1568]: time="2025-10-13T05:56:42.765554046Z" level=info msg="connecting to shim 514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238" address="unix:///run/containerd/s/3aaa9375ac90ef093355d0042fff854d066d690dd547ec5e6541821739cef2d7" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:42.789956 containerd[1568]: time="2025-10-13T05:56:42.789877785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-h4hz8,Uid:31c8cd51-c9af-4060-a19c-cb1f745e22dd,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:42.790857 systemd[1]: Started cri-containerd-514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238.scope - libcontainer container 514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238. 
Oct 13 05:56:42.817132 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:42.904984 containerd[1568]: time="2025-10-13T05:56:42.904076249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f99db6c64-kl67n,Uid:514c8894-3a0b-4b4b-84ac-d41f652dde18,Namespace:calico-system,Attempt:0,} returns sandbox id \"514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238\"" Oct 13 05:56:42.922284 containerd[1568]: time="2025-10-13T05:56:42.922238070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Oct 13 05:56:43.031572 systemd-networkd[1493]: calic9df49e1ae1: Link UP Oct 13 05:56:43.033095 systemd-networkd[1493]: calic9df49e1ae1: Gained carrier Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:42.856 [INFO][4000] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:42.871 [INFO][4000] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--854f97d977--h4hz8-eth0 goldmane-854f97d977- calico-system 31c8cd51-c9af-4060-a19c-cb1f745e22dd 809 0 2025-10-13 05:56:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:854f97d977 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-854f97d977-h4hz8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic9df49e1ae1 [] [] }} ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Namespace="calico-system" Pod="goldmane-854f97d977-h4hz8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--h4hz8-" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:42.871 [INFO][4000] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Namespace="calico-system" Pod="goldmane-854f97d977-h4hz8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--h4hz8-eth0" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:42.962 [INFO][4090] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" HandleID="k8s-pod-network.34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Workload="localhost-k8s-goldmane--854f97d977--h4hz8-eth0" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:42.966 [INFO][4090] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" HandleID="k8s-pod-network.34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Workload="localhost-k8s-goldmane--854f97d977--h4hz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000479bd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-854f97d977-h4hz8", "timestamp":"2025-10-13 05:56:42.962326879 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:42.966 [INFO][4090] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:42.966 [INFO][4090] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:42.967 [INFO][4090] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:42.977 [INFO][4090] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" host="localhost" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:42.996 [INFO][4090] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:43.003 [INFO][4090] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:43.005 [INFO][4090] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:43.008 [INFO][4090] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:43.008 [INFO][4090] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" host="localhost" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:43.010 [INFO][4090] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3 Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:43.014 [INFO][4090] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" host="localhost" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:43.019 [INFO][4090] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" host="localhost" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:43.020 [INFO][4090] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" host="localhost" Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:43.020 [INFO][4090] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:56:43.056674 containerd[1568]: 2025-10-13 05:56:43.020 [INFO][4090] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" HandleID="k8s-pod-network.34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Workload="localhost-k8s-goldmane--854f97d977--h4hz8-eth0" Oct 13 05:56:43.057390 containerd[1568]: 2025-10-13 05:56:43.027 [INFO][4000] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Namespace="calico-system" Pod="goldmane-854f97d977-h4hz8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--h4hz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--h4hz8-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"31c8cd51-c9af-4060-a19c-cb1f745e22dd", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-854f97d977-h4hz8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9df49e1ae1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:43.057390 containerd[1568]: 2025-10-13 05:56:43.027 [INFO][4000] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Namespace="calico-system" Pod="goldmane-854f97d977-h4hz8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--h4hz8-eth0" Oct 13 05:56:43.057390 containerd[1568]: 2025-10-13 05:56:43.027 [INFO][4000] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9df49e1ae1 ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Namespace="calico-system" Pod="goldmane-854f97d977-h4hz8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--h4hz8-eth0" Oct 13 05:56:43.057390 containerd[1568]: 2025-10-13 05:56:43.033 [INFO][4000] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Namespace="calico-system" Pod="goldmane-854f97d977-h4hz8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--h4hz8-eth0" Oct 13 05:56:43.057390 containerd[1568]: 2025-10-13 05:56:43.034 [INFO][4000] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Namespace="calico-system" Pod="goldmane-854f97d977-h4hz8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--h4hz8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--854f97d977--h4hz8-eth0", GenerateName:"goldmane-854f97d977-", Namespace:"calico-system", SelfLink:"", UID:"31c8cd51-c9af-4060-a19c-cb1f745e22dd", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"854f97d977", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3", Pod:"goldmane-854f97d977-h4hz8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9df49e1ae1", MAC:"b6:c7:fe:5f:a2:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:43.057390 containerd[1568]: 2025-10-13 05:56:43.050 [INFO][4000] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" Namespace="calico-system" Pod="goldmane-854f97d977-h4hz8" WorkloadEndpoint="localhost-k8s-goldmane--854f97d977--h4hz8-eth0" Oct 13 05:56:43.087203 containerd[1568]: time="2025-10-13T05:56:43.087133657Z" level=info msg="connecting to shim 34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3" address="unix:///run/containerd/s/146b2266be69d582a020f3b0fa9b6eb4c2c3c89e8b803b06e3538f88f75ef336" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:43.126918 systemd[1]: Started cri-containerd-34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3.scope - libcontainer container 34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3. 
Oct 13 05:56:43.142412 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:43.180961 containerd[1568]: time="2025-10-13T05:56:43.180893081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-854f97d977-h4hz8,Uid:31c8cd51-c9af-4060-a19c-cb1f745e22dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3\"" Oct 13 05:56:43.436910 systemd-networkd[1493]: vxlan.calico: Link UP Oct 13 05:56:43.436921 systemd-networkd[1493]: vxlan.calico: Gained carrier Oct 13 05:56:43.788606 kubelet[2723]: I1013 05:56:43.788570 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79a8338b-50b1-414a-904a-09688f2cfcc6" path="/var/lib/kubelet/pods/79a8338b-50b1-414a-904a-09688f2cfcc6/volumes" Oct 13 05:56:43.833755 containerd[1568]: time="2025-10-13T05:56:43.833712764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657947797c-kdvwv,Uid:12495c87-6d31-4985-8605-8cf9404cc207,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:43.839643 containerd[1568]: time="2025-10-13T05:56:43.839602391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657947797c-8j6v8,Uid:8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:43.841952 containerd[1568]: time="2025-10-13T05:56:43.841926154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd748bcb9-d4vnc,Uid:40e528e5-cb1f-414b-a4ad-47a687c267b9,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:43.927996 systemd-networkd[1493]: calib3959435b50: Gained IPv6LL Oct 13 05:56:43.968850 systemd-networkd[1493]: cali632299b2d96: Link UP Oct 13 05:56:43.969654 systemd-networkd[1493]: cali632299b2d96: Gained carrier Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.891 [INFO][4283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0 calico-apiserver-657947797c- calico-apiserver 12495c87-6d31-4985-8605-8cf9404cc207 810 0 2025-10-13 05:56:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:657947797c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-657947797c-kdvwv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali632299b2d96 [] [] }} ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-kdvwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--kdvwv-" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.891 [INFO][4283] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-kdvwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.926 [INFO][4327] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" 
Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.927 [INFO][4327] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f730), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-657947797c-kdvwv", "timestamp":"2025-10-13 05:56:43.926778085 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.927 [INFO][4327] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.927 [INFO][4327] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.927 [INFO][4327] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.936 [INFO][4327] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" host="localhost" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.941 [INFO][4327] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.946 [INFO][4327] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.947 [INFO][4327] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.949 [INFO][4327] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.949 [INFO][4327] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" host="localhost" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.951 [INFO][4327] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84 Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.955 [INFO][4327] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" host="localhost" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.960 [INFO][4327] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" host="localhost" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.960 [INFO][4327] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" host="localhost" Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.960 
[INFO][4327] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:56:43.984158 containerd[1568]: 2025-10-13 05:56:43.960 [INFO][4327] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:56:43.984914 containerd[1568]: 2025-10-13 05:56:43.965 [INFO][4283] cni-plugin/k8s.go 418: Populated endpoint ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-kdvwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0", GenerateName:"calico-apiserver-657947797c-", Namespace:"calico-apiserver", SelfLink:"", UID:"12495c87-6d31-4985-8605-8cf9404cc207", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657947797c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-657947797c-kdvwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali632299b2d96", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:43.984914 containerd[1568]: 2025-10-13 05:56:43.965 [INFO][4283] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-kdvwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:56:43.984914 containerd[1568]: 2025-10-13 05:56:43.965 [INFO][4283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali632299b2d96 ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-kdvwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:56:43.984914 containerd[1568]: 2025-10-13 05:56:43.969 [INFO][4283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-kdvwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:56:43.984914 containerd[1568]: 2025-10-13 05:56:43.972 [INFO][4283] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-kdvwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0", GenerateName:"calico-apiserver-657947797c-", Namespace:"calico-apiserver", SelfLink:"", UID:"12495c87-6d31-4985-8605-8cf9404cc207", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657947797c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84", Pod:"calico-apiserver-657947797c-kdvwv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali632299b2d96", MAC:"3e:0a:82:2c:6d:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:43.984914 containerd[1568]: 2025-10-13 05:56:43.981 [INFO][4283] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-kdvwv" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:56:44.023812 containerd[1568]: time="2025-10-13T05:56:44.023475111Z" level=info msg="connecting to shim 91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" address="unix:///run/containerd/s/52d4596288f1ff0e3ad35d2b2ca3d3dfd7f96cab999ee6bff3fd415ee13803ca" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:44.050856 systemd[1]: Started cri-containerd-91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84.scope - libcontainer container 91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84. 
Oct 13 05:56:44.070499 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:44.079893 systemd-networkd[1493]: cali8958bbc73ee: Link UP Oct 13 05:56:44.080955 systemd-networkd[1493]: cali8958bbc73ee: Gained carrier Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:43.901 [INFO][4293] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0 calico-apiserver-657947797c- calico-apiserver 8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d 814 0 2025-10-13 05:56:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:657947797c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-657947797c-8j6v8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8958bbc73ee [] [] }} ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-8j6v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--8j6v8-" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:43.905 [INFO][4293] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-8j6v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:43.947 [INFO][4336] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" HandleID="k8s-pod-network.84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Workload="localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:43.948 [INFO][4336] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" HandleID="k8s-pod-network.84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Workload="localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-657947797c-8j6v8", "timestamp":"2025-10-13 05:56:43.947918054 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:43.948 [INFO][4336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:43.961 [INFO][4336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:43.963 [INFO][4336] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.038 [INFO][4336] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" host="localhost" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.044 [INFO][4336] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.048 [INFO][4336] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.050 [INFO][4336] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.054 [INFO][4336] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.054 [INFO][4336] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" host="localhost" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.055 [INFO][4336] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51 Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.059 [INFO][4336] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" host="localhost" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.067 [INFO][4336] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" host="localhost" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.067 [INFO][4336] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" host="localhost" Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.068 [INFO][4336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:56:44.103290 containerd[1568]: 2025-10-13 05:56:44.068 [INFO][4336] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" HandleID="k8s-pod-network.84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Workload="localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0" Oct 13 05:56:44.104096 containerd[1568]: 2025-10-13 05:56:44.071 [INFO][4293] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-8j6v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0", GenerateName:"calico-apiserver-657947797c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657947797c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-657947797c-8j6v8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8958bbc73ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:44.104096 containerd[1568]: 2025-10-13 05:56:44.073 [INFO][4293] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-8j6v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0" Oct 13 05:56:44.104096 containerd[1568]: 2025-10-13 05:56:44.073 [INFO][4293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8958bbc73ee ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-8j6v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0" Oct 13 05:56:44.104096 containerd[1568]: 2025-10-13 05:56:44.082 [INFO][4293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-8j6v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0" Oct 13 05:56:44.104096 containerd[1568]: 2025-10-13 05:56:44.082 [INFO][4293] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-8j6v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0", GenerateName:"calico-apiserver-657947797c-", Namespace:"calico-apiserver", SelfLink:"", UID:"8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"657947797c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51", Pod:"calico-apiserver-657947797c-8j6v8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8958bbc73ee", MAC:"2e:7f:89:0d:5c:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:44.104096 containerd[1568]: 2025-10-13 05:56:44.093 [INFO][4293] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" Namespace="calico-apiserver" Pod="calico-apiserver-657947797c-8j6v8" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--8j6v8-eth0" Oct 13 05:56:44.132261 containerd[1568]: time="2025-10-13T05:56:44.131477869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657947797c-kdvwv,Uid:12495c87-6d31-4985-8605-8cf9404cc207,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\"" Oct 13 05:56:44.163484 containerd[1568]: time="2025-10-13T05:56:44.163408610Z" level=info msg="connecting to shim 84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51" address="unix:///run/containerd/s/e3c24e3eae3a061f0b7a5626329d1c0951959d45b51e62b66b11d49e877be6cd" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:44.182870 systemd-networkd[1493]: calic9df49e1ae1: Gained IPv6LL Oct 13 05:56:44.189820 systemd-networkd[1493]: cali61b5c003f77: Link UP Oct 13 05:56:44.192101 systemd-networkd[1493]: cali61b5c003f77: Gained carrier Oct 13 05:56:44.205871 systemd[1]: Started cri-containerd-84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51.scope - libcontainer container 84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51. 
Oct 13 05:56:44.219946 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:43.898 [INFO][4294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0 calico-apiserver-7cd748bcb9- calico-apiserver 40e528e5-cb1f-414b-a4ad-47a687c267b9 808 0 2025-10-13 05:56:18 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd748bcb9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cd748bcb9-d4vnc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali61b5c003f77 [] [] }} ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-d4vnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:43.898 [INFO][4294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-d4vnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:43.948 [INFO][4334] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" HandleID="k8s-pod-network.a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Workload="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:43.948 [INFO][4334] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" HandleID="k8s-pod-network.a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Workload="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cd748bcb9-d4vnc", "timestamp":"2025-10-13 05:56:43.948081109 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:43.948 [INFO][4334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.068 [INFO][4334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.068 [INFO][4334] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.138 [INFO][4334] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" host="localhost" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.144 [INFO][4334] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.150 [INFO][4334] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.152 [INFO][4334] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.155 [INFO][4334] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.155 [INFO][4334] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" host="localhost" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.157 [INFO][4334] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73 Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.163 [INFO][4334] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" host="localhost" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.172 [INFO][4334] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" host="localhost" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.172 [INFO][4334] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" host="localhost" Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.172 [INFO][4334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:56:44.236706 containerd[1568]: 2025-10-13 05:56:44.172 [INFO][4334] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" HandleID="k8s-pod-network.a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Workload="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0" Oct 13 05:56:44.238018 containerd[1568]: 2025-10-13 05:56:44.177 [INFO][4294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-d4vnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0", GenerateName:"calico-apiserver-7cd748bcb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"40e528e5-cb1f-414b-a4ad-47a687c267b9", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd748bcb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cd748bcb9-d4vnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61b5c003f77", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:44.238018 containerd[1568]: 2025-10-13 05:56:44.177 [INFO][4294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-d4vnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0" Oct 13 05:56:44.238018 containerd[1568]: 2025-10-13 05:56:44.177 [INFO][4294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali61b5c003f77 ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-d4vnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0" Oct 13 05:56:44.238018 containerd[1568]: 2025-10-13 05:56:44.192 [INFO][4294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-d4vnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0" Oct 13 05:56:44.238018 containerd[1568]: 2025-10-13 05:56:44.193 [INFO][4294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-d4vnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0", GenerateName:"calico-apiserver-7cd748bcb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"40e528e5-cb1f-414b-a4ad-47a687c267b9", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd748bcb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73", Pod:"calico-apiserver-7cd748bcb9-d4vnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali61b5c003f77", MAC:"4e:38:fc:25:5e:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:44.238018 containerd[1568]: 2025-10-13 05:56:44.225 [INFO][4294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-d4vnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--d4vnc-eth0" Oct 13 05:56:44.263326 containerd[1568]: time="2025-10-13T05:56:44.263280575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-657947797c-8j6v8,Uid:8f3a0ad4-a70d-4eec-9809-f8c1224f0f1d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51\"" Oct 13 05:56:44.265767 containerd[1568]: time="2025-10-13T05:56:44.265723572Z" level=info msg="connecting to shim a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73" address="unix:///run/containerd/s/3806799da3d41f85de0094691acab9265e64808a4ec08e8ee87ed050bcb7e67b" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:44.302056 systemd[1]: Started cri-containerd-a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73.scope - libcontainer container a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73. 
Oct 13 05:56:44.322763 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:44.362231 containerd[1568]: time="2025-10-13T05:56:44.362187318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd748bcb9-d4vnc,Uid:40e528e5-cb1f-414b-a4ad-47a687c267b9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73\"" Oct 13 05:56:44.405591 containerd[1568]: time="2025-10-13T05:56:44.405522128Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:44.407801 containerd[1568]: time="2025-10-13T05:56:44.407737687Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:44.409978 containerd[1568]: time="2025-10-13T05:56:44.409919904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:44.410423 containerd[1568]: time="2025-10-13T05:56:44.410396338Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.488118283s" Oct 13 05:56:44.410471 containerd[1568]: time="2025-10-13T05:56:44.410424571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Oct 13 05:56:44.411944 containerd[1568]: time="2025-10-13T05:56:44.411903978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Oct 13 05:56:44.415809 containerd[1568]: time="2025-10-13T05:56:44.415763163Z" level=info msg="CreateContainer within sandbox \"514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Oct 13 05:56:44.416480 containerd[1568]: time="2025-10-13T05:56:44.416448079Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Oct 13 05:56:44.423783 containerd[1568]: time="2025-10-13T05:56:44.423730911Z" level=info msg="Container 4bcc534f5d9361e5c191001c521da15f2b105cdeb442a8c90a09d57924682dfd: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:44.431280 containerd[1568]: time="2025-10-13T05:56:44.431248784Z" level=info msg="CreateContainer within sandbox \"514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"4bcc534f5d9361e5c191001c521da15f2b105cdeb442a8c90a09d57924682dfd\"" Oct 13 05:56:44.431725 containerd[1568]: time="2025-10-13T05:56:44.431670886Z" level=info msg="StartContainer for \"4bcc534f5d9361e5c191001c521da15f2b105cdeb442a8c90a09d57924682dfd\"" Oct 13 05:56:44.432933 containerd[1568]: time="2025-10-13T05:56:44.432903650Z" level=info msg="connecting to shim 4bcc534f5d9361e5c191001c521da15f2b105cdeb442a8c90a09d57924682dfd" address="unix:///run/containerd/s/3aaa9375ac90ef093355d0042fff854d066d690dd547ec5e6541821739cef2d7" 
protocol=ttrpc version=3 Oct 13 05:56:44.471829 systemd[1]: Started cri-containerd-4bcc534f5d9361e5c191001c521da15f2b105cdeb442a8c90a09d57924682dfd.scope - libcontainer container 4bcc534f5d9361e5c191001c521da15f2b105cdeb442a8c90a09d57924682dfd. Oct 13 05:56:44.520239 containerd[1568]: time="2025-10-13T05:56:44.520190119Z" level=info msg="StartContainer for \"4bcc534f5d9361e5c191001c521da15f2b105cdeb442a8c90a09d57924682dfd\" returns successfully" Oct 13 05:56:44.850964 containerd[1568]: time="2025-10-13T05:56:44.850917426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8p5bx,Uid:83ab7fef-df4c-4467-a7d6-2fb6be70aa2c,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:44.854149 containerd[1568]: time="2025-10-13T05:56:44.854118846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5s42v,Uid:0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5,Namespace:kube-system,Attempt:0,}" Oct 13 05:56:44.948930 systemd-networkd[1493]: vxlan.calico: Gained IPv6LL Oct 13 05:56:44.976808 systemd-networkd[1493]: calie2649f24b6c: Link UP Oct 13 05:56:44.977057 systemd-networkd[1493]: calie2649f24b6c: Gained carrier Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.903 [INFO][4558] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8p5bx-eth0 csi-node-driver- calico-system 83ab7fef-df4c-4467-a7d6-2fb6be70aa2c 703 0 2025-10-13 05:56:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:f8549cf5c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8p5bx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie2649f24b6c [] [] }} ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Namespace="calico-system" Pod="csi-node-driver-8p5bx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8p5bx-" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.903 [INFO][4558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Namespace="calico-system" Pod="csi-node-driver-8p5bx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8p5bx-eth0" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.931 [INFO][4590] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" HandleID="k8s-pod-network.a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Workload="localhost-k8s-csi--node--driver--8p5bx-eth0" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.931 [INFO][4590] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" HandleID="k8s-pod-network.a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Workload="localhost-k8s-csi--node--driver--8p5bx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8p5bx", "timestamp":"2025-10-13 05:56:44.931380205 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.931 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.931 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.931 [INFO][4590] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.939 [INFO][4590] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" host="localhost" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.944 [INFO][4590] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.949 [INFO][4590] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.951 [INFO][4590] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.956 [INFO][4590] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.956 [INFO][4590] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" host="localhost" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.957 [INFO][4590] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.962 [INFO][4590] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" host="localhost" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.969 [INFO][4590] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" host="localhost" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.969 [INFO][4590] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" host="localhost" Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.969 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:56:44.996180 containerd[1568]: 2025-10-13 05:56:44.969 [INFO][4590] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" HandleID="k8s-pod-network.a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Workload="localhost-k8s-csi--node--driver--8p5bx-eth0" Oct 13 05:56:44.997131 containerd[1568]: 2025-10-13 05:56:44.973 [INFO][4558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Namespace="calico-system" Pod="csi-node-driver-8p5bx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8p5bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8p5bx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83ab7fef-df4c-4467-a7d6-2fb6be70aa2c", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8p5bx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2649f24b6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:44.997131 containerd[1568]: 2025-10-13 05:56:44.973 [INFO][4558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Namespace="calico-system" Pod="csi-node-driver-8p5bx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8p5bx-eth0" Oct 13 05:56:44.997131 containerd[1568]: 2025-10-13 05:56:44.973 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2649f24b6c ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Namespace="calico-system" Pod="csi-node-driver-8p5bx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8p5bx-eth0" Oct 13 05:56:44.997131 containerd[1568]: 2025-10-13 05:56:44.977 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Namespace="calico-system" Pod="csi-node-driver-8p5bx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8p5bx-eth0" Oct 13 05:56:44.997131 containerd[1568]: 2025-10-13 05:56:44.978 [INFO][4558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Namespace="calico-system" Pod="csi-node-driver-8p5bx" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--8p5bx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8p5bx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"83ab7fef-df4c-4467-a7d6-2fb6be70aa2c", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"f8549cf5c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db", Pod:"csi-node-driver-8p5bx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2649f24b6c", MAC:"c6:4c:16:22:42:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:44.997131 containerd[1568]: 2025-10-13 05:56:44.993 [INFO][4558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" Namespace="calico-system" Pod="csi-node-driver-8p5bx" WorkloadEndpoint="localhost-k8s-csi--node--driver--8p5bx-eth0" Oct 13 05:56:45.512332 systemd-networkd[1493]: cali8229d01abc6: Link UP Oct 13 05:56:45.512591 systemd-networkd[1493]: cali8229d01abc6: Gained carrier Oct 13 05:56:45.537033 containerd[1568]: time="2025-10-13T05:56:45.536606541Z" level=info msg="connecting to shim a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db" address="unix:///run/containerd/s/950c4c85285508c3a1e1834b53182816460bd9394259742dad7638263c0d4283" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:44.903 [INFO][4569] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--5s42v-eth0 coredns-66bc5c9577- kube-system 0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5 811 0 2025-10-13 05:56:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-5s42v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8229d01abc6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Namespace="kube-system" Pod="coredns-66bc5c9577-5s42v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5s42v-" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:44.903 [INFO][4569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Namespace="kube-system" Pod="coredns-66bc5c9577-5s42v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5s42v-eth0" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:44.938 [INFO][4588] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" HandleID="k8s-pod-network.96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Workload="localhost-k8s-coredns--66bc5c9577--5s42v-eth0" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:44.938 [INFO][4588] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" HandleID="k8s-pod-network.96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Workload="localhost-k8s-coredns--66bc5c9577--5s42v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-5s42v", "timestamp":"2025-10-13 05:56:44.938450528 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:44.938 [INFO][4588] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:44.970 [INFO][4588] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:44.970 [INFO][4588] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.040 [INFO][4588] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" host="localhost" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.045 [INFO][4588] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.050 [INFO][4588] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.052 [INFO][4588] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.054 [INFO][4588] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.054 [INFO][4588] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" host="localhost" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.217 [INFO][4588] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093 Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.296 [INFO][4588] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" host="localhost" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.504 [INFO][4588] ipam/ipam.go 1256: Successfully 
claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" host="localhost" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.504 [INFO][4588] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" host="localhost" Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.505 [INFO][4588] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:56:45.541802 containerd[1568]: 2025-10-13 05:56:45.505 [INFO][4588] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" HandleID="k8s-pod-network.96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Workload="localhost-k8s-coredns--66bc5c9577--5s42v-eth0" Oct 13 05:56:45.542551 containerd[1568]: 2025-10-13 05:56:45.509 [INFO][4569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Namespace="kube-system" Pod="coredns-66bc5c9577-5s42v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5s42v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--5s42v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-5s42v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8229d01abc6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:45.542551 containerd[1568]: 2025-10-13 05:56:45.509 [INFO][4569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] 
ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Namespace="kube-system" Pod="coredns-66bc5c9577-5s42v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5s42v-eth0" Oct 13 05:56:45.542551 containerd[1568]: 2025-10-13 05:56:45.509 [INFO][4569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8229d01abc6 ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Namespace="kube-system" Pod="coredns-66bc5c9577-5s42v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5s42v-eth0" Oct 13 05:56:45.542551 containerd[1568]: 2025-10-13 05:56:45.513 [INFO][4569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Namespace="kube-system" Pod="coredns-66bc5c9577-5s42v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5s42v-eth0" Oct 13 05:56:45.542770 containerd[1568]: 2025-10-13 05:56:45.514 [INFO][4569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Namespace="kube-system" Pod="coredns-66bc5c9577-5s42v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5s42v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--5s42v-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093", Pod:"coredns-66bc5c9577-5s42v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8229d01abc6", MAC:"c6:7b:8d:12:0a:79", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:45.542770 containerd[1568]: 2025-10-13 05:56:45.538 [INFO][4569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" Namespace="kube-system" Pod="coredns-66bc5c9577-5s42v" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--5s42v-eth0" Oct 13 05:56:45.572045 systemd[1]: Started cri-containerd-a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db.scope - libcontainer container a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db. Oct 13 05:56:45.583211 containerd[1568]: time="2025-10-13T05:56:45.583076349Z" level=info msg="connecting to shim 96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093" address="unix:///run/containerd/s/f55d1b860e8488b07ad11f5b25c1f74a55d8f6da4d6e826ea06ef42e5f4ff498" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:45.594837 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:45.609932 systemd[1]: Started cri-containerd-96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093.scope - libcontainer container 96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093. Oct 13 05:56:45.616139 containerd[1568]: time="2025-10-13T05:56:45.615998117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8p5bx,Uid:83ab7fef-df4c-4467-a7d6-2fb6be70aa2c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db\"" Oct 13 05:56:45.626715 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:45.652998 systemd-networkd[1493]: cali61b5c003f77: Gained IPv6LL Oct 13 05:56:45.665933 containerd[1568]: time="2025-10-13T05:56:45.665892284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5s42v,Uid:0de5fe5b-5d3a-4e6d-bc7a-da4c7be000c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093\"" Oct 13 05:56:45.671180 containerd[1568]: time="2025-10-13T05:56:45.671145685Z" level=info msg="CreateContainer within sandbox \"96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:56:45.684856 containerd[1568]: time="2025-10-13T05:56:45.684827237Z" level=info msg="Container 728c0ef8902a8f1092008ccbf6c108ab42e953298be53cba2e62370ae5eb2d3a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:45.692588 containerd[1568]: time="2025-10-13T05:56:45.692560283Z" level=info msg="CreateContainer within sandbox \"96d1c77215266dde11b0fe665489a5688c9963bfb79aa3a660b43126f8f99093\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"728c0ef8902a8f1092008ccbf6c108ab42e953298be53cba2e62370ae5eb2d3a\"" Oct 13 05:56:45.693170 containerd[1568]: time="2025-10-13T05:56:45.693135423Z" level=info msg="StartContainer for \"728c0ef8902a8f1092008ccbf6c108ab42e953298be53cba2e62370ae5eb2d3a\"" Oct 13 05:56:45.708060 containerd[1568]: time="2025-10-13T05:56:45.707978314Z" level=info msg="connecting to shim 728c0ef8902a8f1092008ccbf6c108ab42e953298be53cba2e62370ae5eb2d3a" address="unix:///run/containerd/s/f55d1b860e8488b07ad11f5b25c1f74a55d8f6da4d6e826ea06ef42e5f4ff498" protocol=ttrpc version=3 Oct 13 05:56:45.736145 systemd[1]: Started cri-containerd-728c0ef8902a8f1092008ccbf6c108ab42e953298be53cba2e62370ae5eb2d3a.scope - libcontainer container 728c0ef8902a8f1092008ccbf6c108ab42e953298be53cba2e62370ae5eb2d3a. 
Oct 13 05:56:45.762915 kubelet[2723]: I1013 05:56:45.762864 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:45.781921 systemd-networkd[1493]: cali632299b2d96: Gained IPv6LL Oct 13 05:56:45.785516 containerd[1568]: time="2025-10-13T05:56:45.785278535Z" level=info msg="StartContainer for \"728c0ef8902a8f1092008ccbf6c108ab42e953298be53cba2e62370ae5eb2d3a\" returns successfully" Oct 13 05:56:45.793444 containerd[1568]: time="2025-10-13T05:56:45.793399810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7948465b6c-q68c8,Uid:3fe8a766-96e5-4857-8f34-f1975d7a2a30,Namespace:calico-system,Attempt:0,}" Oct 13 05:56:45.830499 systemd[1]: Started sshd@7-10.0.0.145:22-10.0.0.1:49446.service - OpenSSH per-connection server daemon (10.0.0.1:49446). Oct 13 05:56:45.972935 systemd-networkd[1493]: cali8958bbc73ee: Gained IPv6LL Oct 13 05:56:45.999732 sshd[4766]: Accepted publickey for core from 10.0.0.1 port 49446 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:56:46.004546 sshd-session[4766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:56:46.016281 systemd-logind[1543]: New session 8 of user core. Oct 13 05:56:46.022867 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 13 05:56:46.040503 kubelet[2723]: I1013 05:56:46.040437 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5s42v" podStartSLOduration=37.040416421 podStartE2EDuration="37.040416421s" podCreationTimestamp="2025-10-13 05:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:45.987883778 +0000 UTC m=+42.296841342" watchObservedRunningTime="2025-10-13 05:56:46.040416421 +0000 UTC m=+42.349374005" Oct 13 05:56:46.050473 systemd-networkd[1493]: calif472a56164f: Link UP Oct 13 05:56:46.053223 systemd-networkd[1493]: calif472a56164f: Gained carrier Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.892 [INFO][4748] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0 calico-kube-controllers-7948465b6c- calico-system 3fe8a766-96e5-4857-8f34-f1975d7a2a30 812 0 2025-10-13 05:56:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7948465b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7948465b6c-q68c8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif472a56164f [] [] }} ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Namespace="calico-system" Pod="calico-kube-controllers-7948465b6c-q68c8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.892 [INFO][4748] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Namespace="calico-system" Pod="calico-kube-controllers-7948465b6c-q68c8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.931 [INFO][4789] ipam/ipam_plugin.go 
225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" HandleID="k8s-pod-network.7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Workload="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.931 [INFO][4789] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" HandleID="k8s-pod-network.7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Workload="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7948465b6c-q68c8", "timestamp":"2025-10-13 05:56:45.931523934 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.931 [INFO][4789] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.932 [INFO][4789] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.932 [INFO][4789] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.937 [INFO][4789] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" host="localhost" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.955 [INFO][4789] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.963 [INFO][4789] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.966 [INFO][4789] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.976 [INFO][4789] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.977 [INFO][4789] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" host="localhost" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.981 [INFO][4789] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:45.992 [INFO][4789] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" host="localhost" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:46.019 [INFO][4789] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" host="localhost" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:46.020 [INFO][4789] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" host="localhost" Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:46.020 [INFO][4789] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:56:46.074253 containerd[1568]: 2025-10-13 05:56:46.020 [INFO][4789] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" HandleID="k8s-pod-network.7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Workload="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0" Oct 13 05:56:46.075915 containerd[1568]: 2025-10-13 05:56:46.036 [INFO][4748] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Namespace="calico-system" Pod="calico-kube-controllers-7948465b6c-q68c8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0", GenerateName:"calico-kube-controllers-7948465b6c-", Namespace:"calico-system", SelfLink:"", UID:"3fe8a766-96e5-4857-8f34-f1975d7a2a30", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7948465b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7948465b6c-q68c8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif472a56164f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:46.075915 containerd[1568]: 2025-10-13 05:56:46.036 [INFO][4748] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Namespace="calico-system" Pod="calico-kube-controllers-7948465b6c-q68c8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0" Oct 13 05:56:46.075915 containerd[1568]: 2025-10-13 05:56:46.036 [INFO][4748] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif472a56164f ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Namespace="calico-system" Pod="calico-kube-controllers-7948465b6c-q68c8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0" Oct 13 05:56:46.075915 containerd[1568]: 2025-10-13 05:56:46.055 [INFO][4748] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Namespace="calico-system" Pod="calico-kube-controllers-7948465b6c-q68c8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0" Oct 13 05:56:46.075915 containerd[1568]: 2025-10-13 05:56:46.056 [INFO][4748] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Namespace="calico-system" Pod="calico-kube-controllers-7948465b6c-q68c8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0", GenerateName:"calico-kube-controllers-7948465b6c-", Namespace:"calico-system", SelfLink:"", UID:"3fe8a766-96e5-4857-8f34-f1975d7a2a30", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7948465b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d", Pod:"calico-kube-controllers-7948465b6c-q68c8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif472a56164f", MAC:"f6:22:ba:0c:a5:11", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:46.075915 containerd[1568]: 2025-10-13 05:56:46.071 [INFO][4748] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" Namespace="calico-system" Pod="calico-kube-controllers-7948465b6c-q68c8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7948465b6c--q68c8-eth0" Oct 13 05:56:46.087919 containerd[1568]: time="2025-10-13T05:56:46.087873797Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a10e5765918ba9d5bd2f55abb28ab588ce9feee33269e34c5e7c207adc3b9f46\" id:\"778f9dcb9e26050dad90a2284e4ff95ab0b04f9128cdf6d8237ec6e5a8bd043b\" pid:4775 exited_at:{seconds:1760335006 nanos:81071741}" Oct 13 05:56:46.119121 containerd[1568]: time="2025-10-13T05:56:46.119067648Z" level=info msg="connecting to shim 7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d" address="unix:///run/containerd/s/f19ec09049e71c8abc129f7818eeee34adb75d9c4764a8b939c00b07eba2448d" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:46.180012 systemd[1]: Started cri-containerd-7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d.scope - libcontainer container 7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d. 
Oct 13 05:56:46.215971 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:46.440589 containerd[1568]: time="2025-10-13T05:56:46.440433024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a10e5765918ba9d5bd2f55abb28ab588ce9feee33269e34c5e7c207adc3b9f46\" id:\"4d93101c62132bbd33cb0ba9e3bed18ebf0ee4d701e05f21e8bfbb1191dc7664\" pid:4856 exited_at:{seconds:1760335006 nanos:439811828}" Oct 13 05:56:46.475483 containerd[1568]: time="2025-10-13T05:56:46.475430913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7948465b6c-q68c8,Uid:3fe8a766-96e5-4857-8f34-f1975d7a2a30,Namespace:calico-system,Attempt:0,} returns sandbox id \"7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d\"" Oct 13 05:56:46.478659 sshd[4804]: Connection closed by 10.0.0.1 port 49446 Oct 13 05:56:46.479531 sshd-session[4766]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:46.485212 systemd[1]: sshd@7-10.0.0.145:22-10.0.0.1:49446.service: Deactivated successfully. Oct 13 05:56:46.487457 systemd[1]: session-8.scope: Deactivated successfully. Oct 13 05:56:46.487462 systemd-networkd[1493]: calie2649f24b6c: Gained IPv6LL Oct 13 05:56:46.488548 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit. Oct 13 05:56:46.500514 systemd-logind[1543]: Removed session 8. Oct 13 05:56:46.790306 containerd[1568]: time="2025-10-13T05:56:46.790263728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bp2xn,Uid:c20bd1d9-f9a0-458b-ba77-ccccb82b1f38,Namespace:kube-system,Attempt:0,}" Oct 13 05:56:46.798768 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3122529108.mount: Deactivated successfully. 
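For reference, the podStartSLOduration reported by the kubelet pod_startup_latency_tracker entry above is simply the gap between the coredns pod's creation timestamp (05:56:09 UTC) and the observed running time (05:56:46.040416421 UTC), about 37.04 s. A minimal check of that arithmetic:

// Confirms the ~37.04s startup duration reported for coredns-66bc5c9577-5s42v
// using the two timestamps from the kubelet log entry above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2025-10-13T05:56:09Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-10-13T05:56:46.040416421Z")
	fmt.Println(running.Sub(created)) // 37.040416421s
}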
Oct 13 05:56:46.996959 systemd-networkd[1493]: cali8229d01abc6: Gained IPv6LL Oct 13 05:56:47.201430 systemd-networkd[1493]: cali79bbe4bf328: Link UP Oct 13 05:56:47.201620 systemd-networkd[1493]: cali79bbe4bf328: Gained carrier Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.131 [INFO][4902] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--bp2xn-eth0 coredns-66bc5c9577- kube-system c20bd1d9-f9a0-458b-ba77-ccccb82b1f38 813 0 2025-10-13 05:56:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-bp2xn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali79bbe4bf328 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Namespace="kube-system" Pod="coredns-66bc5c9577-bp2xn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bp2xn-" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.131 [INFO][4902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Namespace="kube-system" Pod="coredns-66bc5c9577-bp2xn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bp2xn-eth0" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.160 [INFO][4923] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" HandleID="k8s-pod-network.83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Workload="localhost-k8s-coredns--66bc5c9577--bp2xn-eth0" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.160 [INFO][4923] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" HandleID="k8s-pod-network.83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Workload="localhost-k8s-coredns--66bc5c9577--bp2xn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7010), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-bp2xn", "timestamp":"2025-10-13 05:56:47.160116814 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.160 [INFO][4923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.160 [INFO][4923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.160 [INFO][4923] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.168 [INFO][4923] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" host="localhost" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.172 [INFO][4923] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.175 [INFO][4923] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.177 [INFO][4923] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.179 [INFO][4923] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.179 [INFO][4923] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" host="localhost" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.181 [INFO][4923] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123 Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.185 [INFO][4923] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" host="localhost" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.193 [INFO][4923] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" host="localhost" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.193 [INFO][4923] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" host="localhost" Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.193 [INFO][4923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:56:47.222117 containerd[1568]: 2025-10-13 05:56:47.193 [INFO][4923] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" HandleID="k8s-pod-network.83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Workload="localhost-k8s-coredns--66bc5c9577--bp2xn-eth0" Oct 13 05:56:47.224156 containerd[1568]: 2025-10-13 05:56:47.197 [INFO][4902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Namespace="kube-system" Pod="coredns-66bc5c9577-bp2xn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bp2xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bp2xn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c20bd1d9-f9a0-458b-ba77-ccccb82b1f38", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-bp2xn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79bbe4bf328", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:47.224156 containerd[1568]: 2025-10-13 05:56:47.197 [INFO][4902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Namespace="kube-system" Pod="coredns-66bc5c9577-bp2xn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bp2xn-eth0" Oct 13 05:56:47.224156 containerd[1568]: 2025-10-13 05:56:47.197 [INFO][4902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79bbe4bf328 ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Namespace="kube-system" Pod="coredns-66bc5c9577-bp2xn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bp2xn-eth0" Oct 13 05:56:47.224156 containerd[1568]: 2025-10-13 05:56:47.200 
[INFO][4902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Namespace="kube-system" Pod="coredns-66bc5c9577-bp2xn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bp2xn-eth0" Oct 13 05:56:47.224291 containerd[1568]: 2025-10-13 05:56:47.201 [INFO][4902] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Namespace="kube-system" Pod="coredns-66bc5c9577-bp2xn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bp2xn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--bp2xn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c20bd1d9-f9a0-458b-ba77-ccccb82b1f38", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123", Pod:"coredns-66bc5c9577-bp2xn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79bbe4bf328", MAC:"aa:b3:d1:e6:9b:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:47.224291 containerd[1568]: 2025-10-13 05:56:47.215 [INFO][4902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" Namespace="kube-system" Pod="coredns-66bc5c9577-bp2xn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--bp2xn-eth0" Oct 13 05:56:47.254997 containerd[1568]: time="2025-10-13T05:56:47.254943660Z" level=info msg="connecting to shim 83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123" address="unix:///run/containerd/s/c22e833079fb18c15f1890b43404d7eb8cbb09f2e7576553299eccbb30676386" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:47.286328 systemd[1]: Started 
cri-containerd-83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123.scope - libcontainer container 83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123. Oct 13 05:56:47.313471 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:47.553816 containerd[1568]: time="2025-10-13T05:56:47.553759193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bp2xn,Uid:c20bd1d9-f9a0-458b-ba77-ccccb82b1f38,Namespace:kube-system,Attempt:0,} returns sandbox id \"83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123\"" Oct 13 05:56:47.560877 containerd[1568]: time="2025-10-13T05:56:47.560835204Z" level=info msg="CreateContainer within sandbox \"83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 13 05:56:47.573949 systemd-networkd[1493]: calif472a56164f: Gained IPv6LL Oct 13 05:56:47.576099 containerd[1568]: time="2025-10-13T05:56:47.576052134Z" level=info msg="Container 67cd7a890e8bbf9e72ef1fdd23a2d543ba3229d786489f9d5c936cfbe88e525f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:47.584590 containerd[1568]: time="2025-10-13T05:56:47.584545816Z" level=info msg="CreateContainer within sandbox \"83b7d7d431b4b914e4c3692b235ff4d5e93ffe09f6c2ab2e8ac26d9a479ae123\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"67cd7a890e8bbf9e72ef1fdd23a2d543ba3229d786489f9d5c936cfbe88e525f\"" Oct 13 05:56:47.585404 containerd[1568]: time="2025-10-13T05:56:47.585364872Z" level=info msg="StartContainer for \"67cd7a890e8bbf9e72ef1fdd23a2d543ba3229d786489f9d5c936cfbe88e525f\"" Oct 13 05:56:47.586673 containerd[1568]: time="2025-10-13T05:56:47.586635668Z" level=info msg="connecting to shim 67cd7a890e8bbf9e72ef1fdd23a2d543ba3229d786489f9d5c936cfbe88e525f" address="unix:///run/containerd/s/c22e833079fb18c15f1890b43404d7eb8cbb09f2e7576553299eccbb30676386" protocol=ttrpc version=3 Oct 13 05:56:47.595967 containerd[1568]: time="2025-10-13T05:56:47.595882252Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:47.596863 containerd[1568]: time="2025-10-13T05:56:47.596838087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Oct 13 05:56:47.597984 containerd[1568]: time="2025-10-13T05:56:47.597941638Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:47.600990 containerd[1568]: time="2025-10-13T05:56:47.600953209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:47.601626 containerd[1568]: time="2025-10-13T05:56:47.601595845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.189655658s" Oct 13 05:56:47.601663 containerd[1568]: time="2025-10-13T05:56:47.601625381Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Oct 13 05:56:47.605842 containerd[1568]: time="2025-10-13T05:56:47.605277305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:56:47.610629 containerd[1568]: time="2025-10-13T05:56:47.610587832Z" level=info msg="CreateContainer within sandbox \"34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Oct 13 05:56:47.614864 systemd[1]: Started cri-containerd-67cd7a890e8bbf9e72ef1fdd23a2d543ba3229d786489f9d5c936cfbe88e525f.scope - libcontainer container 67cd7a890e8bbf9e72ef1fdd23a2d543ba3229d786489f9d5c936cfbe88e525f. Oct 13 05:56:47.624541 containerd[1568]: time="2025-10-13T05:56:47.623558466Z" level=info msg="Container b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:47.964942 containerd[1568]: time="2025-10-13T05:56:47.964806238Z" level=info msg="StartContainer for \"67cd7a890e8bbf9e72ef1fdd23a2d543ba3229d786489f9d5c936cfbe88e525f\" returns successfully" Oct 13 05:56:47.971909 containerd[1568]: time="2025-10-13T05:56:47.971858685Z" level=info msg="CreateContainer within sandbox \"34c3e21624eb435b4310ae4b846615609cf40e78ba5ebf3941f289286217aff3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9\"" Oct 13 05:56:47.972574 containerd[1568]: time="2025-10-13T05:56:47.972533882Z" level=info msg="StartContainer for \"b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9\"" Oct 13 05:56:47.975016 containerd[1568]: time="2025-10-13T05:56:47.974943635Z" level=info msg="connecting to shim b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9" address="unix:///run/containerd/s/146b2266be69d582a020f3b0fa9b6eb4c2c3c89e8b803b06e3538f88f75ef336" protocol=ttrpc version=3 Oct 13 05:56:48.003023 systemd[1]: Started cri-containerd-b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9.scope - libcontainer container b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9. 
Oct 13 05:56:48.023785 kubelet[2723]: I1013 05:56:48.021517 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bp2xn" podStartSLOduration=39.021495013 podStartE2EDuration="39.021495013s" podCreationTimestamp="2025-10-13 05:56:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:48.01900495 +0000 UTC m=+44.327962514" watchObservedRunningTime="2025-10-13 05:56:48.021495013 +0000 UTC m=+44.330452577" Oct 13 05:56:48.141386 containerd[1568]: time="2025-10-13T05:56:48.141327828Z" level=info msg="StartContainer for \"b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9\" returns successfully" Oct 13 05:56:48.533274 systemd-networkd[1493]: cali79bbe4bf328: Gained IPv6LL Oct 13 05:56:49.028528 kubelet[2723]: I1013 05:56:49.028279 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-854f97d977-h4hz8" podStartSLOduration=24.608408378 podStartE2EDuration="29.028257609s" podCreationTimestamp="2025-10-13 05:56:20 +0000 UTC" firstStartedPulling="2025-10-13 05:56:43.183090657 +0000 UTC m=+39.492048221" lastFinishedPulling="2025-10-13 05:56:47.602939898 +0000 UTC m=+43.911897452" observedRunningTime="2025-10-13 05:56:49.013923068 +0000 UTC m=+45.322880642" watchObservedRunningTime="2025-10-13 05:56:49.028257609 +0000 UTC m=+45.337215173" Oct 13 05:56:50.003368 kubelet[2723]: I1013 05:56:50.003310 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:50.770358 containerd[1568]: time="2025-10-13T05:56:50.770299235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9\" id:\"2804616036ddc8834791f522fe86d9af0a6f28b619e7cfcfdaff7b0b561f50b2\" pid:5087 exit_status:1 exited_at:{seconds:1760335010 nanos:769643936}" Oct 13 05:56:50.881725 containerd[1568]: time="2025-10-13T05:56:50.881620604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9\" id:\"bc13f284e02a2daf338b435eb50be41109fb5893ca67201cca1f9e2d94feaf65\" pid:5111 exit_status:1 exited_at:{seconds:1760335010 nanos:881117660}" Oct 13 05:56:51.113143 containerd[1568]: time="2025-10-13T05:56:51.113099008Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9\" id:\"5f32b7edd647bfe4423bc00745fc030deba96bead4d3608cde1190fb1e6c40d6\" pid:5136 exit_status:1 exited_at:{seconds:1760335011 nanos:112780972}" Oct 13 05:56:51.452528 containerd[1568]: time="2025-10-13T05:56:51.452368013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:51.453293 containerd[1568]: time="2025-10-13T05:56:51.453213920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Oct 13 05:56:51.454496 containerd[1568]: time="2025-10-13T05:56:51.454445892Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:51.456384 containerd[1568]: time="2025-10-13T05:56:51.456320731Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:51.457140 containerd[1568]: time="2025-10-13T05:56:51.457095224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.851787862s" Oct 13 05:56:51.457140 containerd[1568]: time="2025-10-13T05:56:51.457126803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:56:51.458088 containerd[1568]: time="2025-10-13T05:56:51.458061537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:56:51.462114 containerd[1568]: time="2025-10-13T05:56:51.462083203Z" level=info msg="CreateContainer within sandbox \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:56:51.471246 containerd[1568]: time="2025-10-13T05:56:51.471196584Z" level=info msg="Container 729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:51.479919 containerd[1568]: time="2025-10-13T05:56:51.479858328Z" level=info msg="CreateContainer within sandbox \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\"" Oct 13 05:56:51.480752 containerd[1568]: time="2025-10-13T05:56:51.480720926Z" level=info msg="StartContainer for \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\"" Oct 13 05:56:51.481807 containerd[1568]: time="2025-10-13T05:56:51.481754244Z" level=info msg="connecting to shim 729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022" address="unix:///run/containerd/s/52d4596288f1ff0e3ad35d2b2ca3d3dfd7f96cab999ee6bff3fd415ee13803ca" protocol=ttrpc version=3 Oct 13 05:56:51.495793 systemd[1]: Started sshd@8-10.0.0.145:22-10.0.0.1:49452.service - OpenSSH per-connection server daemon (10.0.0.1:49452). Oct 13 05:56:51.508884 systemd[1]: Started cri-containerd-729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022.scope - libcontainer container 729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022. Oct 13 05:56:51.563027 containerd[1568]: time="2025-10-13T05:56:51.562983857Z" level=info msg="StartContainer for \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" returns successfully" Oct 13 05:56:51.576730 sshd[5161]: Accepted publickey for core from 10.0.0.1 port 49452 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:56:51.578603 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:56:51.584209 systemd-logind[1543]: New session 9 of user core. Oct 13 05:56:51.593264 systemd[1]: Started session-9.scope - Session 9 of User core. 
Oct 13 05:56:51.736586 sshd[5185]: Connection closed by 10.0.0.1 port 49452 Oct 13 05:56:51.736927 sshd-session[5161]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:51.744464 systemd[1]: sshd@8-10.0.0.145:22-10.0.0.1:49452.service: Deactivated successfully. Oct 13 05:56:51.747142 systemd[1]: session-9.scope: Deactivated successfully. Oct 13 05:56:51.748197 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit. Oct 13 05:56:51.749785 systemd-logind[1543]: Removed session 9. Oct 13 05:56:51.853063 containerd[1568]: time="2025-10-13T05:56:51.852985856Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:51.853677 containerd[1568]: time="2025-10-13T05:56:51.853627820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:56:51.855285 containerd[1568]: time="2025-10-13T05:56:51.855243021Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 397.157569ms" Oct 13 05:56:51.855285 containerd[1568]: time="2025-10-13T05:56:51.855282345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:56:51.857716 containerd[1568]: time="2025-10-13T05:56:51.856360187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Oct 13 05:56:51.862054 containerd[1568]: time="2025-10-13T05:56:51.862018625Z" level=info msg="CreateContainer within sandbox \"84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:56:51.871886 containerd[1568]: time="2025-10-13T05:56:51.870156976Z" level=info msg="Container d3eef9eb7b75b916d62dacc43efd13f9b82632ca936a1fbb266f55c3702c283a: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:51.897824 containerd[1568]: time="2025-10-13T05:56:51.897765161Z" level=info msg="CreateContainer within sandbox \"84acc4b15243c7a9c15e62394c5d8343efa910aceb4888b223a1a05518609e51\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d3eef9eb7b75b916d62dacc43efd13f9b82632ca936a1fbb266f55c3702c283a\"" Oct 13 05:56:51.898899 containerd[1568]: time="2025-10-13T05:56:51.898569560Z" level=info msg="StartContainer for \"d3eef9eb7b75b916d62dacc43efd13f9b82632ca936a1fbb266f55c3702c283a\"" Oct 13 05:56:51.900149 containerd[1568]: time="2025-10-13T05:56:51.900100283Z" level=info msg="connecting to shim d3eef9eb7b75b916d62dacc43efd13f9b82632ca936a1fbb266f55c3702c283a" address="unix:///run/containerd/s/e3c24e3eae3a061f0b7a5626329d1c0951959d45b51e62b66b11d49e877be6cd" protocol=ttrpc version=3 Oct 13 05:56:51.927952 systemd[1]: Started cri-containerd-d3eef9eb7b75b916d62dacc43efd13f9b82632ca936a1fbb266f55c3702c283a.scope - libcontainer container d3eef9eb7b75b916d62dacc43efd13f9b82632ca936a1fbb266f55c3702c283a. 
Oct 13 05:56:51.981506 containerd[1568]: time="2025-10-13T05:56:51.981460710Z" level=info msg="StartContainer for \"d3eef9eb7b75b916d62dacc43efd13f9b82632ca936a1fbb266f55c3702c283a\" returns successfully" Oct 13 05:56:52.032246 kubelet[2723]: I1013 05:56:52.031280 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-657947797c-8j6v8" podStartSLOduration=26.439742363 podStartE2EDuration="34.031257728s" podCreationTimestamp="2025-10-13 05:56:18 +0000 UTC" firstStartedPulling="2025-10-13 05:56:44.264661919 +0000 UTC m=+40.573619483" lastFinishedPulling="2025-10-13 05:56:51.856177284 +0000 UTC m=+48.165134848" observedRunningTime="2025-10-13 05:56:52.029275007 +0000 UTC m=+48.338232571" watchObservedRunningTime="2025-10-13 05:56:52.031257728 +0000 UTC m=+48.340215302" Oct 13 05:56:52.365775 containerd[1568]: time="2025-10-13T05:56:52.363739297Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:52.365775 containerd[1568]: time="2025-10-13T05:56:52.364314266Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Oct 13 05:56:52.369222 containerd[1568]: time="2025-10-13T05:56:52.369175126Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 512.777318ms" Oct 13 05:56:52.369222 containerd[1568]: time="2025-10-13T05:56:52.369208059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Oct 13 05:56:52.370699 containerd[1568]: time="2025-10-13T05:56:52.370641528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Oct 13 05:56:52.376185 containerd[1568]: time="2025-10-13T05:56:52.376145796Z" level=info msg="CreateContainer within sandbox \"a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:56:52.388998 containerd[1568]: time="2025-10-13T05:56:52.388849663Z" level=info msg="Container 120dbbd82c2e2b6d47a731d2e9240de3c8525ebd9fd44d3451b45679f3f80122: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:52.399574 containerd[1568]: time="2025-10-13T05:56:52.399497602Z" level=info msg="CreateContainer within sandbox \"a913f52105e459fe0d1d99b0ddc9d68412875b9dd2cc0485775edbe6e8a4fd73\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"120dbbd82c2e2b6d47a731d2e9240de3c8525ebd9fd44d3451b45679f3f80122\"" Oct 13 05:56:52.401845 containerd[1568]: time="2025-10-13T05:56:52.401797767Z" level=info msg="StartContainer for \"120dbbd82c2e2b6d47a731d2e9240de3c8525ebd9fd44d3451b45679f3f80122\"" Oct 13 05:56:52.404077 containerd[1568]: time="2025-10-13T05:56:52.404026369Z" level=info msg="connecting to shim 120dbbd82c2e2b6d47a731d2e9240de3c8525ebd9fd44d3451b45679f3f80122" address="unix:///run/containerd/s/3806799da3d41f85de0094691acab9265e64808a4ec08e8ee87ed050bcb7e67b" protocol=ttrpc version=3 Oct 13 05:56:52.435189 systemd[1]: Started cri-containerd-120dbbd82c2e2b6d47a731d2e9240de3c8525ebd9fd44d3451b45679f3f80122.scope - 
libcontainer container 120dbbd82c2e2b6d47a731d2e9240de3c8525ebd9fd44d3451b45679f3f80122. Oct 13 05:56:52.502714 containerd[1568]: time="2025-10-13T05:56:52.502288442Z" level=info msg="StartContainer for \"120dbbd82c2e2b6d47a731d2e9240de3c8525ebd9fd44d3451b45679f3f80122\" returns successfully" Oct 13 05:56:53.027674 kubelet[2723]: I1013 05:56:53.027147 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:53.047946 kubelet[2723]: I1013 05:56:53.028364 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:53.122061 kubelet[2723]: I1013 05:56:53.121959 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-657947797c-kdvwv" podStartSLOduration=27.796928785 podStartE2EDuration="35.121937967s" podCreationTimestamp="2025-10-13 05:56:18 +0000 UTC" firstStartedPulling="2025-10-13 05:56:44.132919876 +0000 UTC m=+40.441877440" lastFinishedPulling="2025-10-13 05:56:51.457929058 +0000 UTC m=+47.766886622" observedRunningTime="2025-10-13 05:56:52.048803039 +0000 UTC m=+48.357760603" watchObservedRunningTime="2025-10-13 05:56:53.121937967 +0000 UTC m=+49.430895531" Oct 13 05:56:54.720762 kubelet[2723]: I1013 05:56:54.719081 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cd748bcb9-d4vnc" podStartSLOduration=28.712760678 podStartE2EDuration="36.719061597s" podCreationTimestamp="2025-10-13 05:56:18 +0000 UTC" firstStartedPulling="2025-10-13 05:56:44.36385587 +0000 UTC m=+40.672813434" lastFinishedPulling="2025-10-13 05:56:52.370156789 +0000 UTC m=+48.679114353" observedRunningTime="2025-10-13 05:56:53.121543687 +0000 UTC m=+49.430501271" watchObservedRunningTime="2025-10-13 05:56:54.719061597 +0000 UTC m=+51.028019161" Oct 13 05:56:54.860730 kubelet[2723]: I1013 05:56:54.860284 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:54.916852 containerd[1568]: time="2025-10-13T05:56:54.915163710Z" level=info msg="StopContainer for \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" with timeout 30 (s)" Oct 13 05:56:54.920295 systemd[1]: Created slice kubepods-besteffort-pod6496becd_f80c_49cd_aa15_d57b311cdd14.slice - libcontainer container kubepods-besteffort-pod6496becd_f80c_49cd_aa15_d57b311cdd14.slice. Oct 13 05:56:54.934077 containerd[1568]: time="2025-10-13T05:56:54.934009728Z" level=info msg="Stop container \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" with signal terminated" Oct 13 05:56:54.963455 systemd[1]: cri-containerd-729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022.scope: Deactivated successfully. Oct 13 05:56:54.963836 systemd[1]: cri-containerd-729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022.scope: Consumed 1.269s CPU time, 41.7M memory peak. 
Oct 13 05:56:54.969276 containerd[1568]: time="2025-10-13T05:56:54.969091387Z" level=info msg="received exit event container_id:\"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" id:\"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" pid:5164 exit_status:1 exited_at:{seconds:1760335014 nanos:968313808}" Oct 13 05:56:54.969586 containerd[1568]: time="2025-10-13T05:56:54.969529780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" id:\"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" pid:5164 exit_status:1 exited_at:{seconds:1760335014 nanos:968313808}" Oct 13 05:56:55.003360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022-rootfs.mount: Deactivated successfully. Oct 13 05:56:55.053416 kubelet[2723]: I1013 05:56:55.053371 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6496becd-f80c-49cd-aa15-d57b311cdd14-calico-apiserver-certs\") pod \"calico-apiserver-7cd748bcb9-hpqkh\" (UID: \"6496becd-f80c-49cd-aa15-d57b311cdd14\") " pod="calico-apiserver/calico-apiserver-7cd748bcb9-hpqkh" Oct 13 05:56:55.053416 kubelet[2723]: I1013 05:56:55.053414 2723 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkljn\" (UniqueName: \"kubernetes.io/projected/6496becd-f80c-49cd-aa15-d57b311cdd14-kube-api-access-zkljn\") pod \"calico-apiserver-7cd748bcb9-hpqkh\" (UID: \"6496becd-f80c-49cd-aa15-d57b311cdd14\") " pod="calico-apiserver/calico-apiserver-7cd748bcb9-hpqkh" Oct 13 05:56:55.067901 containerd[1568]: time="2025-10-13T05:56:55.067824163Z" level=info msg="StopContainer for \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" returns successfully" Oct 13 05:56:55.070589 containerd[1568]: time="2025-10-13T05:56:55.070549045Z" level=info msg="StopPodSandbox for \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\"" Oct 13 05:56:55.088720 containerd[1568]: time="2025-10-13T05:56:55.088655184Z" level=info msg="Container to stop \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 13 05:56:55.100190 systemd[1]: cri-containerd-91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84.scope: Deactivated successfully. Oct 13 05:56:55.107022 containerd[1568]: time="2025-10-13T05:56:55.106955007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" id:\"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" pid:4396 exit_status:137 exited_at:{seconds:1760335015 nanos:106008631}" Oct 13 05:56:55.142949 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84-rootfs.mount: Deactivated successfully. 
Oct 13 05:56:55.170139 containerd[1568]: time="2025-10-13T05:56:55.169603312Z" level=info msg="shim disconnected" id=91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84 namespace=k8s.io Oct 13 05:56:55.170139 containerd[1568]: time="2025-10-13T05:56:55.170083622Z" level=warning msg="cleaning up after shim disconnected" id=91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84 namespace=k8s.io Oct 13 05:56:55.170370 containerd[1568]: time="2025-10-13T05:56:55.170092369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 13 05:56:55.222635 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84-shm.mount: Deactivated successfully. Oct 13 05:56:55.227901 containerd[1568]: time="2025-10-13T05:56:55.227257697Z" level=info msg="received exit event sandbox_id:\"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" exit_status:137 exited_at:{seconds:1760335015 nanos:106008631}" Oct 13 05:56:55.235499 containerd[1568]: time="2025-10-13T05:56:55.234881049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd748bcb9-hpqkh,Uid:6496becd-f80c-49cd-aa15-d57b311cdd14,Namespace:calico-apiserver,Attempt:0,}" Oct 13 05:56:55.383746 systemd-networkd[1493]: cali632299b2d96: Link DOWN Oct 13 05:56:55.383760 systemd-networkd[1493]: cali632299b2d96: Lost carrier Oct 13 05:56:55.436047 systemd-networkd[1493]: cali32699729d18: Link UP Oct 13 05:56:55.436994 systemd-networkd[1493]: cali32699729d18: Gained carrier Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.280 [INFO][5370] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0 calico-apiserver-7cd748bcb9- calico-apiserver 6496becd-f80c-49cd-aa15-d57b311cdd14 1093 0 2025-10-13 05:56:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd748bcb9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cd748bcb9-hpqkh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali32699729d18 [] [] }} ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-hpqkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.281 [INFO][5370] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-hpqkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.311 [INFO][5389] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" HandleID="k8s-pod-network.0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Workload="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.312 [INFO][5389] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" 
HandleID="k8s-pod-network.0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Workload="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cd748bcb9-hpqkh", "timestamp":"2025-10-13 05:56:55.311707084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.312 [INFO][5389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.312 [INFO][5389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.312 [INFO][5389] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.382 [INFO][5389] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" host="localhost" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.391 [INFO][5389] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.399 [INFO][5389] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.402 [INFO][5389] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.406 [INFO][5389] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.406 [INFO][5389] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" host="localhost" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.408 [INFO][5389] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.417 [INFO][5389] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" host="localhost" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.424 [INFO][5389] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.138/26] block=192.168.88.128/26 handle="k8s-pod-network.0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" host="localhost" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.424 [INFO][5389] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.138/26] handle="k8s-pod-network.0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" host="localhost" Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.424 [INFO][5389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Oct 13 05:56:55.459227 containerd[1568]: 2025-10-13 05:56:55.424 [INFO][5389] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.138/26] IPv6=[] ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" HandleID="k8s-pod-network.0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Workload="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0" Oct 13 05:56:55.459960 containerd[1568]: 2025-10-13 05:56:55.429 [INFO][5370] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-hpqkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0", GenerateName:"calico-apiserver-7cd748bcb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"6496becd-f80c-49cd-aa15-d57b311cdd14", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd748bcb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cd748bcb9-hpqkh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32699729d18", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:55.459960 containerd[1568]: 2025-10-13 05:56:55.429 [INFO][5370] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.138/32] ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-hpqkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0" Oct 13 05:56:55.459960 containerd[1568]: 2025-10-13 05:56:55.429 [INFO][5370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32699729d18 ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-hpqkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0" Oct 13 05:56:55.459960 containerd[1568]: 2025-10-13 05:56:55.437 [INFO][5370] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-hpqkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0" Oct 13 05:56:55.459960 containerd[1568]: 2025-10-13 05:56:55.438 [INFO][5370] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-hpqkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0", GenerateName:"calico-apiserver-7cd748bcb9-", Namespace:"calico-apiserver", SelfLink:"", UID:"6496becd-f80c-49cd-aa15-d57b311cdd14", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.October, 13, 5, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd748bcb9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef", Pod:"calico-apiserver-7cd748bcb9-hpqkh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali32699729d18", MAC:"92:49:de:e2:7b:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 13 05:56:55.459960 containerd[1568]: 2025-10-13 05:56:55.450 [INFO][5370] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" Namespace="calico-apiserver" Pod="calico-apiserver-7cd748bcb9-hpqkh" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd748bcb9--hpqkh-eth0" Oct 13 05:56:55.491416 containerd[1568]: time="2025-10-13T05:56:55.491308305Z" level=info msg="connecting to shim 0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef" address="unix:///run/containerd/s/c865da6d1a7090d4395542485c8388b499c3e8d71b6585d094a5861464c7354c" namespace=k8s.io protocol=ttrpc version=3 Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.381 [INFO][5378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.381 [INFO][5378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" iface="eth0" netns="/var/run/netns/cni-3ad70cbf-26b0-ff48-705a-b7f39944682f" Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.381 [INFO][5378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" iface="eth0" netns="/var/run/netns/cni-3ad70cbf-26b0-ff48-705a-b7f39944682f" Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.392 [INFO][5378] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" after=10.63923ms iface="eth0" netns="/var/run/netns/cni-3ad70cbf-26b0-ff48-705a-b7f39944682f" Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.392 [INFO][5378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.392 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.429 [INFO][5403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.429 [INFO][5403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.429 [INFO][5403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.508 [INFO][5403] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.508 [INFO][5403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.510 [INFO][5403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:56:55.526110 containerd[1568]: 2025-10-13 05:56:55.520 [INFO][5378] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:56:55.531302 containerd[1568]: time="2025-10-13T05:56:55.531237656Z" level=info msg="TearDown network for sandbox \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" successfully" Oct 13 05:56:55.531302 containerd[1568]: time="2025-10-13T05:56:55.531300895Z" level=info msg="StopPodSandbox for \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" returns successfully" Oct 13 05:56:55.541087 systemd[1]: Started cri-containerd-0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef.scope - libcontainer container 0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef. 
Oct 13 05:56:55.597889 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 13 05:56:55.657751 kubelet[2723]: I1013 05:56:55.657465 2723 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12495c87-6d31-4985-8605-8cf9404cc207-calico-apiserver-certs\") pod \"12495c87-6d31-4985-8605-8cf9404cc207\" (UID: \"12495c87-6d31-4985-8605-8cf9404cc207\") " Oct 13 05:56:55.657751 kubelet[2723]: I1013 05:56:55.657534 2723 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmdpc\" (UniqueName: \"kubernetes.io/projected/12495c87-6d31-4985-8605-8cf9404cc207-kube-api-access-xmdpc\") pod \"12495c87-6d31-4985-8605-8cf9404cc207\" (UID: \"12495c87-6d31-4985-8605-8cf9404cc207\") " Oct 13 05:56:55.699357 kubelet[2723]: I1013 05:56:55.699303 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12495c87-6d31-4985-8605-8cf9404cc207-kube-api-access-xmdpc" (OuterVolumeSpecName: "kube-api-access-xmdpc") pod "12495c87-6d31-4985-8605-8cf9404cc207" (UID: "12495c87-6d31-4985-8605-8cf9404cc207"). InnerVolumeSpecName "kube-api-access-xmdpc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 13 05:56:55.700618 kubelet[2723]: I1013 05:56:55.700588 2723 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12495c87-6d31-4985-8605-8cf9404cc207-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "12495c87-6d31-4985-8605-8cf9404cc207" (UID: "12495c87-6d31-4985-8605-8cf9404cc207"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 13 05:56:55.701795 containerd[1568]: time="2025-10-13T05:56:55.701747000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd748bcb9-hpqkh,Uid:6496becd-f80c-49cd-aa15-d57b311cdd14,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef\"" Oct 13 05:56:55.709495 containerd[1568]: time="2025-10-13T05:56:55.709453898Z" level=info msg="CreateContainer within sandbox \"0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Oct 13 05:56:55.719044 containerd[1568]: time="2025-10-13T05:56:55.719005218Z" level=info msg="Container 579f96f6df08551f67b043f6d5307195c35d3c056e647367fae66cc1eb1b6517: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:55.728949 containerd[1568]: time="2025-10-13T05:56:55.728893680Z" level=info msg="CreateContainer within sandbox \"0e1421bf47d9eac909f7e7e2c53986d3f9a4dcc140fdb98bee2045e7e27a25ef\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"579f96f6df08551f67b043f6d5307195c35d3c056e647367fae66cc1eb1b6517\"" Oct 13 05:56:55.729738 containerd[1568]: time="2025-10-13T05:56:55.729673162Z" level=info msg="StartContainer for \"579f96f6df08551f67b043f6d5307195c35d3c056e647367fae66cc1eb1b6517\"" Oct 13 05:56:55.732343 containerd[1568]: time="2025-10-13T05:56:55.732284161Z" level=info msg="connecting to shim 579f96f6df08551f67b043f6d5307195c35d3c056e647367fae66cc1eb1b6517" address="unix:///run/containerd/s/c865da6d1a7090d4395542485c8388b499c3e8d71b6585d094a5861464c7354c" protocol=ttrpc version=3 Oct 13 05:56:55.761015 kubelet[2723]: I1013 05:56:55.758390 2723 reconciler_common.go:299] "Volume detached for 
volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/12495c87-6d31-4985-8605-8cf9404cc207-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Oct 13 05:56:55.761015 kubelet[2723]: I1013 05:56:55.758432 2723 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xmdpc\" (UniqueName: \"kubernetes.io/projected/12495c87-6d31-4985-8605-8cf9404cc207-kube-api-access-xmdpc\") on node \"localhost\" DevicePath \"\"" Oct 13 05:56:55.787384 systemd[1]: Started cri-containerd-579f96f6df08551f67b043f6d5307195c35d3c056e647367fae66cc1eb1b6517.scope - libcontainer container 579f96f6df08551f67b043f6d5307195c35d3c056e647367fae66cc1eb1b6517. Oct 13 05:56:55.801606 systemd[1]: Removed slice kubepods-besteffort-pod12495c87_6d31_4985_8605_8cf9404cc207.slice - libcontainer container kubepods-besteffort-pod12495c87_6d31_4985_8605_8cf9404cc207.slice. Oct 13 05:56:55.801722 systemd[1]: kubepods-besteffort-pod12495c87_6d31_4985_8605_8cf9404cc207.slice: Consumed 1.300s CPU time, 42M memory peak. Oct 13 05:56:55.989857 containerd[1568]: time="2025-10-13T05:56:55.989681886Z" level=info msg="StartContainer for \"579f96f6df08551f67b043f6d5307195c35d3c056e647367fae66cc1eb1b6517\" returns successfully" Oct 13 05:56:56.006257 systemd[1]: run-netns-cni\x2d3ad70cbf\x2d26b0\x2dff48\x2d705a\x2db7f39944682f.mount: Deactivated successfully. Oct 13 05:56:56.006386 systemd[1]: var-lib-kubelet-pods-12495c87\x2d6d31\x2d4985\x2d8605\x2d8cf9404cc207-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxmdpc.mount: Deactivated successfully. Oct 13 05:56:56.006475 systemd[1]: var-lib-kubelet-pods-12495c87\x2d6d31\x2d4985\x2d8605\x2d8cf9404cc207-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Oct 13 05:56:56.024653 containerd[1568]: time="2025-10-13T05:56:56.024585718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:56.025874 containerd[1568]: time="2025-10-13T05:56:56.025463795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Oct 13 05:56:56.026797 containerd[1568]: time="2025-10-13T05:56:56.026747683Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:56.028842 containerd[1568]: time="2025-10-13T05:56:56.028792900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:56.029407 containerd[1568]: time="2025-10-13T05:56:56.029309319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.658162013s" Oct 13 05:56:56.029407 containerd[1568]: time="2025-10-13T05:56:56.029353312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Oct 13 05:56:56.032008 containerd[1568]: 
time="2025-10-13T05:56:56.031897574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Oct 13 05:56:56.129777 kubelet[2723]: I1013 05:56:56.129678 2723 scope.go:117] "RemoveContainer" containerID="729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022" Oct 13 05:56:56.132702 containerd[1568]: time="2025-10-13T05:56:56.132396545Z" level=info msg="RemoveContainer for \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\"" Oct 13 05:56:56.226881 containerd[1568]: time="2025-10-13T05:56:56.226821539Z" level=info msg="CreateContainer within sandbox \"514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Oct 13 05:56:56.245178 containerd[1568]: time="2025-10-13T05:56:56.243903466Z" level=info msg="Container 39a4b31b82d70a8747a67335baf85873ce92296d550a09fd5e2d645774382e85: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:56.251710 kubelet[2723]: I1013 05:56:56.251208 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cd748bcb9-hpqkh" podStartSLOduration=2.251150291 podStartE2EDuration="2.251150291s" podCreationTimestamp="2025-10-13 05:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-13 05:56:56.246260507 +0000 UTC m=+52.555218081" watchObservedRunningTime="2025-10-13 05:56:56.251150291 +0000 UTC m=+52.560107875" Oct 13 05:56:56.252678 containerd[1568]: time="2025-10-13T05:56:56.252577980Z" level=info msg="RemoveContainer for \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" returns successfully" Oct 13 05:56:56.253149 kubelet[2723]: I1013 05:56:56.253089 2723 scope.go:117] "RemoveContainer" containerID="729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022" Oct 13 05:56:56.254224 containerd[1568]: time="2025-10-13T05:56:56.254184673Z" level=error msg="ContainerStatus for \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\": not found" Oct 13 05:56:56.254579 kubelet[2723]: E1013 05:56:56.254557 2723 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\": not found" containerID="729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022" Oct 13 05:56:56.254735 kubelet[2723]: I1013 05:56:56.254664 2723 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022"} err="failed to get container status \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\": rpc error: code = NotFound desc = an error occurred when try to find container \"729045f0dd9356e11f932a11bbca6309847b095c15435c3d375f7e00c1a94022\": not found" Oct 13 05:56:56.263169 containerd[1568]: time="2025-10-13T05:56:56.263123103Z" level=info msg="CreateContainer within sandbox \"514b5ad9f1272e81be2abe9d9e667e70fe8857364a7ebce270b3d1fa99a21238\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"39a4b31b82d70a8747a67335baf85873ce92296d550a09fd5e2d645774382e85\"" Oct 13 05:56:56.264856 containerd[1568]: time="2025-10-13T05:56:56.264752769Z" 
level=info msg="StartContainer for \"39a4b31b82d70a8747a67335baf85873ce92296d550a09fd5e2d645774382e85\"" Oct 13 05:56:56.268821 containerd[1568]: time="2025-10-13T05:56:56.268789012Z" level=info msg="connecting to shim 39a4b31b82d70a8747a67335baf85873ce92296d550a09fd5e2d645774382e85" address="unix:///run/containerd/s/3aaa9375ac90ef093355d0042fff854d066d690dd547ec5e6541821739cef2d7" protocol=ttrpc version=3 Oct 13 05:56:56.304889 systemd[1]: Started cri-containerd-39a4b31b82d70a8747a67335baf85873ce92296d550a09fd5e2d645774382e85.scope - libcontainer container 39a4b31b82d70a8747a67335baf85873ce92296d550a09fd5e2d645774382e85. Oct 13 05:56:56.370153 containerd[1568]: time="2025-10-13T05:56:56.370051284Z" level=info msg="StartContainer for \"39a4b31b82d70a8747a67335baf85873ce92296d550a09fd5e2d645774382e85\" returns successfully" Oct 13 05:56:56.758844 systemd[1]: Started sshd@9-10.0.0.145:22-10.0.0.1:38896.service - OpenSSH per-connection server daemon (10.0.0.1:38896). Oct 13 05:56:56.837957 sshd[5552]: Accepted publickey for core from 10.0.0.1 port 38896 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:56:56.840257 sshd-session[5552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:56:56.845509 systemd-logind[1543]: New session 10 of user core. Oct 13 05:56:56.852953 systemd-networkd[1493]: cali32699729d18: Gained IPv6LL Oct 13 05:56:56.854839 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 13 05:56:56.998832 sshd[5555]: Connection closed by 10.0.0.1 port 38896 Oct 13 05:56:56.999194 sshd-session[5552]: pam_unix(sshd:session): session closed for user core Oct 13 05:56:57.004019 systemd[1]: sshd@9-10.0.0.145:22-10.0.0.1:38896.service: Deactivated successfully. Oct 13 05:56:57.006720 systemd[1]: session-10.scope: Deactivated successfully. Oct 13 05:56:57.007932 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit. Oct 13 05:56:57.009545 systemd-logind[1543]: Removed session 10. 
Oct 13 05:56:57.143633 kubelet[2723]: I1013 05:56:57.143599 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:56:57.152361 kubelet[2723]: I1013 05:56:57.152275 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f99db6c64-kl67n" podStartSLOduration=3.038649272 podStartE2EDuration="16.15225661s" podCreationTimestamp="2025-10-13 05:56:41 +0000 UTC" firstStartedPulling="2025-10-13 05:56:42.917946422 +0000 UTC m=+39.226903986" lastFinishedPulling="2025-10-13 05:56:56.03155376 +0000 UTC m=+52.340511324" observedRunningTime="2025-10-13 05:56:57.151570032 +0000 UTC m=+53.460527596" watchObservedRunningTime="2025-10-13 05:56:57.15225661 +0000 UTC m=+53.461214174" Oct 13 05:56:57.449567 containerd[1568]: time="2025-10-13T05:56:57.449503807Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:57.450335 containerd[1568]: time="2025-10-13T05:56:57.450284011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Oct 13 05:56:57.451393 containerd[1568]: time="2025-10-13T05:56:57.451358336Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:57.453494 containerd[1568]: time="2025-10-13T05:56:57.453449669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:56:57.454070 containerd[1568]: time="2025-10-13T05:56:57.454045678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.422121184s" Oct 13 05:56:57.454146 containerd[1568]: time="2025-10-13T05:56:57.454072769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Oct 13 05:56:57.455637 containerd[1568]: time="2025-10-13T05:56:57.455578293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Oct 13 05:56:57.460367 containerd[1568]: time="2025-10-13T05:56:57.460317073Z" level=info msg="CreateContainer within sandbox \"a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Oct 13 05:56:57.473502 containerd[1568]: time="2025-10-13T05:56:57.473433539Z" level=info msg="Container be3b22666c56ffae7ba1d3d5c67b7e02b7de0f8f7559568c6c64c1a98c5a017f: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:56:57.495716 containerd[1568]: time="2025-10-13T05:56:57.495635930Z" level=info msg="CreateContainer within sandbox \"a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"be3b22666c56ffae7ba1d3d5c67b7e02b7de0f8f7559568c6c64c1a98c5a017f\"" Oct 13 05:56:57.496704 containerd[1568]: time="2025-10-13T05:56:57.496621028Z" level=info msg="StartContainer for \"be3b22666c56ffae7ba1d3d5c67b7e02b7de0f8f7559568c6c64c1a98c5a017f\"" Oct 13 05:56:57.498474 
containerd[1568]: time="2025-10-13T05:56:57.498268479Z" level=info msg="connecting to shim be3b22666c56ffae7ba1d3d5c67b7e02b7de0f8f7559568c6c64c1a98c5a017f" address="unix:///run/containerd/s/950c4c85285508c3a1e1834b53182816460bd9394259742dad7638263c0d4283" protocol=ttrpc version=3 Oct 13 05:56:57.533913 systemd[1]: Started cri-containerd-be3b22666c56ffae7ba1d3d5c67b7e02b7de0f8f7559568c6c64c1a98c5a017f.scope - libcontainer container be3b22666c56ffae7ba1d3d5c67b7e02b7de0f8f7559568c6c64c1a98c5a017f. Oct 13 05:56:57.592908 containerd[1568]: time="2025-10-13T05:56:57.592839291Z" level=info msg="StartContainer for \"be3b22666c56ffae7ba1d3d5c67b7e02b7de0f8f7559568c6c64c1a98c5a017f\" returns successfully" Oct 13 05:56:57.790236 kubelet[2723]: I1013 05:56:57.790180 2723 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12495c87-6d31-4985-8605-8cf9404cc207" path="/var/lib/kubelet/pods/12495c87-6d31-4985-8605-8cf9404cc207/volumes" Oct 13 05:56:58.495071 containerd[1568]: time="2025-10-13T05:56:58.495004608Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9\" id:\"8fe93f5fdece55d461263ea43e48513a0e474c6d92317d747ff8c444fc86b9bb\" pid:5617 exited_at:{seconds:1760335018 nanos:494533403}" Oct 13 05:57:00.232405 containerd[1568]: time="2025-10-13T05:57:00.232318622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:57:00.234131 containerd[1568]: time="2025-10-13T05:57:00.233905058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Oct 13 05:57:00.235785 containerd[1568]: time="2025-10-13T05:57:00.235705506Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:57:00.242339 containerd[1568]: time="2025-10-13T05:57:00.242269949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:57:00.243000 containerd[1568]: time="2025-10-13T05:57:00.242951687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.787330183s" Oct 13 05:57:00.243000 containerd[1568]: time="2025-10-13T05:57:00.242990891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Oct 13 05:57:00.244265 containerd[1568]: time="2025-10-13T05:57:00.244216971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Oct 13 05:57:00.257281 containerd[1568]: time="2025-10-13T05:57:00.257201738Z" level=info msg="CreateContainer within sandbox \"7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Oct 13 05:57:00.273438 containerd[1568]: time="2025-10-13T05:57:00.273375006Z" level=info msg="Container 
a5a10a3acd07d4ac8d3c690078b8b8824ecffec402c4fef72c222518dd6a656c: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:57:00.284638 containerd[1568]: time="2025-10-13T05:57:00.284571829Z" level=info msg="CreateContainer within sandbox \"7fce909bb22568e99c9f4504dc4d9684d11b858fc5ccd3adb7f64e93db67465d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a5a10a3acd07d4ac8d3c690078b8b8824ecffec402c4fef72c222518dd6a656c\"" Oct 13 05:57:00.285635 containerd[1568]: time="2025-10-13T05:57:00.285589929Z" level=info msg="StartContainer for \"a5a10a3acd07d4ac8d3c690078b8b8824ecffec402c4fef72c222518dd6a656c\"" Oct 13 05:57:00.286958 containerd[1568]: time="2025-10-13T05:57:00.286894696Z" level=info msg="connecting to shim a5a10a3acd07d4ac8d3c690078b8b8824ecffec402c4fef72c222518dd6a656c" address="unix:///run/containerd/s/f19ec09049e71c8abc129f7818eeee34adb75d9c4764a8b939c00b07eba2448d" protocol=ttrpc version=3 Oct 13 05:57:00.328925 systemd[1]: Started cri-containerd-a5a10a3acd07d4ac8d3c690078b8b8824ecffec402c4fef72c222518dd6a656c.scope - libcontainer container a5a10a3acd07d4ac8d3c690078b8b8824ecffec402c4fef72c222518dd6a656c. Oct 13 05:57:00.386187 containerd[1568]: time="2025-10-13T05:57:00.385675629Z" level=info msg="StartContainer for \"a5a10a3acd07d4ac8d3c690078b8b8824ecffec402c4fef72c222518dd6a656c\" returns successfully" Oct 13 05:57:01.590678 kubelet[2723]: I1013 05:57:01.590599 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7948465b6c-q68c8" podStartSLOduration=27.823648444 podStartE2EDuration="41.590556436s" podCreationTimestamp="2025-10-13 05:56:20 +0000 UTC" firstStartedPulling="2025-10-13 05:56:46.477201187 +0000 UTC m=+42.786158751" lastFinishedPulling="2025-10-13 05:57:00.244109179 +0000 UTC m=+56.553066743" observedRunningTime="2025-10-13 05:57:01.589872392 +0000 UTC m=+57.898829956" watchObservedRunningTime="2025-10-13 05:57:01.590556436 +0000 UTC m=+57.899514000" Oct 13 05:57:01.610200 containerd[1568]: time="2025-10-13T05:57:01.610134119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5a10a3acd07d4ac8d3c690078b8b8824ecffec402c4fef72c222518dd6a656c\" id:\"07d25b52efed3f39744d615f6cabe9353f3bdc19d4b0ea8985dfc046417f67b6\" pid:5692 exited_at:{seconds:1760335021 nanos:609829407}" Oct 13 05:57:02.015849 systemd[1]: Started sshd@10-10.0.0.145:22-10.0.0.1:44638.service - OpenSSH per-connection server daemon (10.0.0.1:44638). Oct 13 05:57:02.272624 sshd[5703]: Accepted publickey for core from 10.0.0.1 port 44638 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:02.274312 sshd-session[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:02.278974 systemd-logind[1543]: New session 11 of user core. Oct 13 05:57:02.286854 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 13 05:57:02.597192 sshd[5706]: Connection closed by 10.0.0.1 port 44638 Oct 13 05:57:02.598251 sshd-session[5703]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:02.609873 systemd[1]: sshd@10-10.0.0.145:22-10.0.0.1:44638.service: Deactivated successfully. Oct 13 05:57:02.612537 systemd[1]: session-11.scope: Deactivated successfully. Oct 13 05:57:02.615085 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit. Oct 13 05:57:02.618682 systemd[1]: Started sshd@11-10.0.0.145:22-10.0.0.1:44644.service - OpenSSH per-connection server daemon (10.0.0.1:44644). 
Oct 13 05:57:02.619377 systemd-logind[1543]: Removed session 11. Oct 13 05:57:02.676923 sshd[5720]: Accepted publickey for core from 10.0.0.1 port 44644 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:02.678212 sshd-session[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:02.682572 systemd-logind[1543]: New session 12 of user core. Oct 13 05:57:02.691837 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 13 05:57:02.842820 sshd[5723]: Connection closed by 10.0.0.1 port 44644 Oct 13 05:57:02.843259 sshd-session[5720]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:02.852950 systemd[1]: sshd@11-10.0.0.145:22-10.0.0.1:44644.service: Deactivated successfully. Oct 13 05:57:02.857484 systemd[1]: session-12.scope: Deactivated successfully. Oct 13 05:57:02.859062 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit. Oct 13 05:57:02.864545 systemd[1]: Started sshd@12-10.0.0.145:22-10.0.0.1:44648.service - OpenSSH per-connection server daemon (10.0.0.1:44648). Oct 13 05:57:02.867923 systemd-logind[1543]: Removed session 12. Oct 13 05:57:02.920105 sshd[5736]: Accepted publickey for core from 10.0.0.1 port 44648 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:02.921752 sshd-session[5736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:02.926337 systemd-logind[1543]: New session 13 of user core. Oct 13 05:57:02.935846 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 13 05:57:03.086349 sshd[5739]: Connection closed by 10.0.0.1 port 44648 Oct 13 05:57:03.086840 sshd-session[5736]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:03.091707 systemd[1]: sshd@12-10.0.0.145:22-10.0.0.1:44648.service: Deactivated successfully. Oct 13 05:57:03.093795 systemd[1]: session-13.scope: Deactivated successfully. Oct 13 05:57:03.094630 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit. Oct 13 05:57:03.096463 systemd-logind[1543]: Removed session 13. 
Oct 13 05:57:03.786419 containerd[1568]: time="2025-10-13T05:57:03.785197304Z" level=info msg="StopPodSandbox for \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\"" Oct 13 05:57:04.008675 containerd[1568]: time="2025-10-13T05:57:04.008608777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:57:04.009538 containerd[1568]: time="2025-10-13T05:57:04.009508771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Oct 13 05:57:04.011162 containerd[1568]: time="2025-10-13T05:57:04.011122928Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:57:04.013380 containerd[1568]: time="2025-10-13T05:57:04.013347131Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 13 05:57:04.014242 containerd[1568]: time="2025-10-13T05:57:04.014197679Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.769951072s" Oct 13 05:57:04.014305 containerd[1568]: time="2025-10-13T05:57:04.014242726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Oct 13 05:57:04.020650 containerd[1568]: time="2025-10-13T05:57:04.020596430Z" level=info msg="CreateContainer within sandbox \"a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Oct 13 05:57:04.035388 containerd[1568]: time="2025-10-13T05:57:04.035338297Z" level=info msg="Container d7c18ff2c277965f36088b0b1125a67f49a7364c4f302680eb57c8b294a20d87: CDI devices from CRI Config.CDIDevices: []" Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:03.999 [WARNING][5774] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.001 [INFO][5774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.001 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" iface="eth0" netns="" Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.001 [INFO][5774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.001 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.029 [INFO][5783] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.029 [INFO][5783] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.029 [INFO][5783] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.039 [WARNING][5783] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.039 [INFO][5783] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.040 [INFO][5783] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:57:04.049075 containerd[1568]: 2025-10-13 05:57:04.045 [INFO][5774] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:57:04.049958 containerd[1568]: time="2025-10-13T05:57:04.049076300Z" level=info msg="TearDown network for sandbox \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" successfully" Oct 13 05:57:04.049958 containerd[1568]: time="2025-10-13T05:57:04.049123100Z" level=info msg="StopPodSandbox for \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" returns successfully" Oct 13 05:57:04.050887 containerd[1568]: time="2025-10-13T05:57:04.050853362Z" level=info msg="RemovePodSandbox for \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\"" Oct 13 05:57:04.050944 containerd[1568]: time="2025-10-13T05:57:04.050907797Z" level=info msg="Forcibly stopping sandbox \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\"" Oct 13 05:57:04.057485 containerd[1568]: time="2025-10-13T05:57:04.056885758Z" level=info msg="CreateContainer within sandbox \"a54145ffdd0dcff93782cf386df3a252d3f33bb0afd2271525c83bf684dfc9db\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d7c18ff2c277965f36088b0b1125a67f49a7364c4f302680eb57c8b294a20d87\"" Oct 13 05:57:04.058856 containerd[1568]: time="2025-10-13T05:57:04.058833228Z" level=info msg="StartContainer for \"d7c18ff2c277965f36088b0b1125a67f49a7364c4f302680eb57c8b294a20d87\"" Oct 13 05:57:04.061977 containerd[1568]: time="2025-10-13T05:57:04.061937175Z" level=info msg="connecting to shim d7c18ff2c277965f36088b0b1125a67f49a7364c4f302680eb57c8b294a20d87" address="unix:///run/containerd/s/950c4c85285508c3a1e1834b53182816460bd9394259742dad7638263c0d4283" protocol=ttrpc version=3 Oct 13 05:57:04.087850 systemd[1]: Started cri-containerd-d7c18ff2c277965f36088b0b1125a67f49a7364c4f302680eb57c8b294a20d87.scope - libcontainer container d7c18ff2c277965f36088b0b1125a67f49a7364c4f302680eb57c8b294a20d87. Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.095 [WARNING][5802] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" WorkloadEndpoint="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.095 [INFO][5802] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.095 [INFO][5802] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" iface="eth0" netns="" Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.095 [INFO][5802] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.095 [INFO][5802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.117 [INFO][5830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.117 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.118 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.123 [WARNING][5830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.123 [INFO][5830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" HandleID="k8s-pod-network.91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Workload="localhost-k8s-calico--apiserver--657947797c--kdvwv-eth0" Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.125 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Oct 13 05:57:04.131833 containerd[1568]: 2025-10-13 05:57:04.127 [INFO][5802] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84" Oct 13 05:57:04.132214 containerd[1568]: time="2025-10-13T05:57:04.131888570Z" level=info msg="TearDown network for sandbox \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" successfully" Oct 13 05:57:04.142236 containerd[1568]: time="2025-10-13T05:57:04.142193814Z" level=info msg="StartContainer for \"d7c18ff2c277965f36088b0b1125a67f49a7364c4f302680eb57c8b294a20d87\" returns successfully" Oct 13 05:57:04.144342 containerd[1568]: time="2025-10-13T05:57:04.143851556Z" level=info msg="Ensure that sandbox 91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84 in task-service has been cleanup successfully" Oct 13 05:57:04.147747 containerd[1568]: time="2025-10-13T05:57:04.147725025Z" level=info msg="RemovePodSandbox \"91b22be699bccb0949495c08d39431df480221c34d4baeb475299efcc63fbd84\" returns successfully" Oct 13 05:57:04.781765 kubelet[2723]: I1013 05:57:04.781675 2723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8p5bx" podStartSLOduration=26.385474162 podStartE2EDuration="44.781656231s" podCreationTimestamp="2025-10-13 05:56:20 +0000 UTC" firstStartedPulling="2025-10-13 05:56:45.618915243 +0000 UTC m=+41.927872807" lastFinishedPulling="2025-10-13 05:57:04.015097312 +0000 UTC m=+60.324054876" observedRunningTime="2025-10-13 05:57:04.781256271 +0000 UTC m=+61.090213835" watchObservedRunningTime="2025-10-13 05:57:04.781656231 +0000 UTC m=+61.090613795" Oct 13 05:57:04.890587 kubelet[2723]: I1013 05:57:04.890527 2723 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Oct 13 05:57:04.890587 kubelet[2723]: I1013 05:57:04.890567 2723 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Oct 13 05:57:08.104284 systemd[1]: Started sshd@13-10.0.0.145:22-10.0.0.1:44650.service - OpenSSH per-connection server daemon (10.0.0.1:44650). Oct 13 05:57:08.173000 sshd[5855]: Accepted publickey for core from 10.0.0.1 port 44650 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:08.174915 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:08.179509 systemd-logind[1543]: New session 14 of user core. Oct 13 05:57:08.187834 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 13 05:57:08.328413 sshd[5858]: Connection closed by 10.0.0.1 port 44650 Oct 13 05:57:08.329015 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:08.332893 systemd[1]: sshd@13-10.0.0.145:22-10.0.0.1:44650.service: Deactivated successfully. Oct 13 05:57:08.335008 systemd[1]: session-14.scope: Deactivated successfully. Oct 13 05:57:08.337301 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit. Oct 13 05:57:08.338328 systemd-logind[1543]: Removed session 14. Oct 13 05:57:13.344745 systemd[1]: Started sshd@14-10.0.0.145:22-10.0.0.1:37678.service - OpenSSH per-connection server daemon (10.0.0.1:37678). 
Oct 13 05:57:13.405578 sshd[5881]: Accepted publickey for core from 10.0.0.1 port 37678 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:13.407321 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:13.411786 systemd-logind[1543]: New session 15 of user core. Oct 13 05:57:13.422806 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 13 05:57:13.587941 sshd[5884]: Connection closed by 10.0.0.1 port 37678 Oct 13 05:57:13.588278 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:13.593905 systemd[1]: sshd@14-10.0.0.145:22-10.0.0.1:37678.service: Deactivated successfully. Oct 13 05:57:13.595899 systemd[1]: session-15.scope: Deactivated successfully. Oct 13 05:57:13.596899 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit. Oct 13 05:57:13.598063 systemd-logind[1543]: Removed session 15. Oct 13 05:57:16.180034 containerd[1568]: time="2025-10-13T05:57:16.179978730Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a10e5765918ba9d5bd2f55abb28ab588ce9feee33269e34c5e7c207adc3b9f46\" id:\"06075e6a268ffb0a5fef8ef465d60276dba0bc6bb376b34d39108f7ce898b43c\" pid:5911 exited_at:{seconds:1760335036 nanos:179602821}" Oct 13 05:57:18.612634 systemd[1]: Started sshd@15-10.0.0.145:22-10.0.0.1:37692.service - OpenSSH per-connection server daemon (10.0.0.1:37692). Oct 13 05:57:18.686346 sshd[5924]: Accepted publickey for core from 10.0.0.1 port 37692 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:18.688468 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:18.693761 systemd-logind[1543]: New session 16 of user core. Oct 13 05:57:18.702946 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 13 05:57:18.879409 sshd[5927]: Connection closed by 10.0.0.1 port 37692 Oct 13 05:57:18.880922 sshd-session[5924]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:18.885340 systemd[1]: sshd@15-10.0.0.145:22-10.0.0.1:37692.service: Deactivated successfully. Oct 13 05:57:18.887295 systemd[1]: session-16.scope: Deactivated successfully. Oct 13 05:57:18.888117 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit. Oct 13 05:57:18.889323 systemd-logind[1543]: Removed session 16. Oct 13 05:57:18.986961 kubelet[2723]: I1013 05:57:18.986905 2723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 13 05:57:21.097856 containerd[1568]: time="2025-10-13T05:57:21.097790810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5cd00e677055bd6c0b766403c064696cd574395e828397ac7b211da80e408f9\" id:\"e4fe112c201887710cd82b2ce48a57a7be52ae51f0865f575c76b27186aaf822\" pid:5954 exited_at:{seconds:1760335041 nanos:97451653}" Oct 13 05:57:23.891254 systemd[1]: Started sshd@16-10.0.0.145:22-10.0.0.1:55578.service - OpenSSH per-connection server daemon (10.0.0.1:55578). Oct 13 05:57:23.964903 sshd[5974]: Accepted publickey for core from 10.0.0.1 port 55578 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:23.966520 sshd-session[5974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:23.970965 systemd-logind[1543]: New session 17 of user core. Oct 13 05:57:23.981832 systemd[1]: Started session-17.scope - Session 17 of User core. 
Oct 13 05:57:24.247805 sshd[5978]: Connection closed by 10.0.0.1 port 55578 Oct 13 05:57:24.248112 sshd-session[5974]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:24.257255 systemd[1]: sshd@16-10.0.0.145:22-10.0.0.1:55578.service: Deactivated successfully. Oct 13 05:57:24.259082 systemd[1]: session-17.scope: Deactivated successfully. Oct 13 05:57:24.259992 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit. Oct 13 05:57:24.263305 systemd[1]: Started sshd@17-10.0.0.145:22-10.0.0.1:55582.service - OpenSSH per-connection server daemon (10.0.0.1:55582). Oct 13 05:57:24.263938 systemd-logind[1543]: Removed session 17. Oct 13 05:57:24.321172 sshd[5991]: Accepted publickey for core from 10.0.0.1 port 55582 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:24.322483 sshd-session[5991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:24.327112 systemd-logind[1543]: New session 18 of user core. Oct 13 05:57:24.345831 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 13 05:57:24.647854 sshd[5994]: Connection closed by 10.0.0.1 port 55582 Oct 13 05:57:24.648180 sshd-session[5991]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:24.657568 systemd[1]: sshd@17-10.0.0.145:22-10.0.0.1:55582.service: Deactivated successfully. Oct 13 05:57:24.659701 systemd[1]: session-18.scope: Deactivated successfully. Oct 13 05:57:24.660628 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit. Oct 13 05:57:24.663521 systemd[1]: Started sshd@18-10.0.0.145:22-10.0.0.1:55588.service - OpenSSH per-connection server daemon (10.0.0.1:55588). Oct 13 05:57:24.664584 systemd-logind[1543]: Removed session 18. Oct 13 05:57:24.729569 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 55588 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:24.731032 sshd-session[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:24.735571 systemd-logind[1543]: New session 19 of user core. Oct 13 05:57:24.749864 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 13 05:57:25.273076 sshd[6009]: Connection closed by 10.0.0.1 port 55588 Oct 13 05:57:25.274643 sshd-session[6006]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:25.285117 systemd[1]: sshd@18-10.0.0.145:22-10.0.0.1:55588.service: Deactivated successfully. Oct 13 05:57:25.287332 systemd[1]: session-19.scope: Deactivated successfully. Oct 13 05:57:25.288328 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit. Oct 13 05:57:25.293864 systemd-logind[1543]: Removed session 19. Oct 13 05:57:25.294275 systemd[1]: Started sshd@19-10.0.0.145:22-10.0.0.1:55602.service - OpenSSH per-connection server daemon (10.0.0.1:55602). Oct 13 05:57:25.366429 sshd[6027]: Accepted publickey for core from 10.0.0.1 port 55602 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:25.368004 sshd-session[6027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:25.372793 systemd-logind[1543]: New session 20 of user core. Oct 13 05:57:25.383865 systemd[1]: Started session-20.scope - Session 20 of User core. 
Oct 13 05:57:25.887360 sshd[6030]: Connection closed by 10.0.0.1 port 55602 Oct 13 05:57:25.888730 sshd-session[6027]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:25.896919 systemd[1]: sshd@19-10.0.0.145:22-10.0.0.1:55602.service: Deactivated successfully. Oct 13 05:57:25.899124 systemd[1]: session-20.scope: Deactivated successfully. Oct 13 05:57:25.900352 systemd-logind[1543]: Session 20 logged out. Waiting for processes to exit. Oct 13 05:57:25.902650 systemd-logind[1543]: Removed session 20. Oct 13 05:57:25.904241 systemd[1]: Started sshd@20-10.0.0.145:22-10.0.0.1:55616.service - OpenSSH per-connection server daemon (10.0.0.1:55616). Oct 13 05:57:25.963046 sshd[6042]: Accepted publickey for core from 10.0.0.1 port 55616 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:25.964519 sshd-session[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:25.969158 systemd-logind[1543]: New session 21 of user core. Oct 13 05:57:25.979840 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 13 05:57:26.304521 sshd[6045]: Connection closed by 10.0.0.1 port 55616 Oct 13 05:57:26.304909 sshd-session[6042]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:26.309875 systemd[1]: sshd@20-10.0.0.145:22-10.0.0.1:55616.service: Deactivated successfully. Oct 13 05:57:26.312360 systemd[1]: session-21.scope: Deactivated successfully. Oct 13 05:57:26.313303 systemd-logind[1543]: Session 21 logged out. Waiting for processes to exit. Oct 13 05:57:26.314984 systemd-logind[1543]: Removed session 21. Oct 13 05:57:31.324876 systemd[1]: Started sshd@21-10.0.0.145:22-10.0.0.1:55622.service - OpenSSH per-connection server daemon (10.0.0.1:55622). Oct 13 05:57:31.385971 sshd[6062]: Accepted publickey for core from 10.0.0.1 port 55622 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:31.387457 sshd-session[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:31.392290 systemd-logind[1543]: New session 22 of user core. Oct 13 05:57:31.401919 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 13 05:57:31.523102 sshd[6065]: Connection closed by 10.0.0.1 port 55622 Oct 13 05:57:31.523431 sshd-session[6062]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:31.528428 systemd[1]: sshd@21-10.0.0.145:22-10.0.0.1:55622.service: Deactivated successfully. Oct 13 05:57:31.530622 systemd[1]: session-22.scope: Deactivated successfully. Oct 13 05:57:31.531404 systemd-logind[1543]: Session 22 logged out. Waiting for processes to exit. Oct 13 05:57:31.532619 systemd-logind[1543]: Removed session 22. Oct 13 05:57:31.604766 containerd[1568]: time="2025-10-13T05:57:31.604591244Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5a10a3acd07d4ac8d3c690078b8b8824ecffec402c4fef72c222518dd6a656c\" id:\"d0947837e91085bad656d2d661b02727d4ddb3d980eaf6af00b829c8454e40a2\" pid:6091 exited_at:{seconds:1760335051 nanos:604292687}" Oct 13 05:57:36.537328 systemd[1]: Started sshd@22-10.0.0.145:22-10.0.0.1:51116.service - OpenSSH per-connection server daemon (10.0.0.1:51116). Oct 13 05:57:36.606443 sshd[6104]: Accepted publickey for core from 10.0.0.1 port 51116 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:36.607995 sshd-session[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:36.612457 systemd-logind[1543]: New session 23 of user core. 
Oct 13 05:57:36.623831 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 13 05:57:36.751290 sshd[6107]: Connection closed by 10.0.0.1 port 51116 Oct 13 05:57:36.751727 sshd-session[6104]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:36.755637 systemd[1]: sshd@22-10.0.0.145:22-10.0.0.1:51116.service: Deactivated successfully. Oct 13 05:57:36.757699 systemd[1]: session-23.scope: Deactivated successfully. Oct 13 05:57:36.758502 systemd-logind[1543]: Session 23 logged out. Waiting for processes to exit. Oct 13 05:57:36.759749 systemd-logind[1543]: Removed session 23. Oct 13 05:57:41.768682 systemd[1]: Started sshd@23-10.0.0.145:22-10.0.0.1:51130.service - OpenSSH per-connection server daemon (10.0.0.1:51130). Oct 13 05:57:41.822104 sshd[6122]: Accepted publickey for core from 10.0.0.1 port 51130 ssh2: RSA SHA256:Z6cKBKw8+oNb6lspDzCkFEPsX7/7UK9yxsbZQJws+qk Oct 13 05:57:41.823451 sshd-session[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 13 05:57:41.827561 systemd-logind[1543]: New session 24 of user core. Oct 13 05:57:41.837833 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 13 05:57:41.953090 sshd[6125]: Connection closed by 10.0.0.1 port 51130 Oct 13 05:57:41.953426 sshd-session[6122]: pam_unix(sshd:session): session closed for user core Oct 13 05:57:41.957399 systemd[1]: sshd@23-10.0.0.145:22-10.0.0.1:51130.service: Deactivated successfully. Oct 13 05:57:41.959545 systemd[1]: session-24.scope: Deactivated successfully. Oct 13 05:57:41.960553 systemd-logind[1543]: Session 24 logged out. Waiting for processes to exit. Oct 13 05:57:41.961770 systemd-logind[1543]: Removed session 24.