Mar 10 02:06:42.004496 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 23:01:22 -00 2026
Mar 10 02:06:42.004516 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bcd0808bf4ec60436f0ff2e8373a873eb88ae42d4ac26e6e6d81129499700895
Mar 10 02:06:42.004572 kernel: BIOS-provided physical RAM map:
Mar 10 02:06:42.004579 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 10 02:06:42.004585 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 10 02:06:42.004590 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 10 02:06:42.004596 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 10 02:06:42.004602 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 10 02:06:42.004607 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 10 02:06:42.004613 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 10 02:06:42.004619 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 10 02:06:42.004627 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 10 02:06:42.004633 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 10 02:06:42.004638 kernel: NX (Execute Disable) protection: active
Mar 10 02:06:42.004645 kernel: APIC: Static calls initialized
Mar 10 02:06:42.004651 kernel: SMBIOS 2.8 present.
Mar 10 02:06:42.004659 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 10 02:06:42.004665 kernel: DMI: Memory slots populated: 1/1
Mar 10 02:06:42.004671 kernel: Hypervisor detected: KVM
Mar 10 02:06:42.004676 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 10 02:06:42.004682 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 10 02:06:42.004688 kernel: kvm-clock: using sched offset of 5856701505 cycles
Mar 10 02:06:42.004695 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 02:06:42.004701 kernel: tsc: Detected 2445.426 MHz processor
Mar 10 02:06:42.004707 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 10 02:06:42.004713 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 10 02:06:42.004722 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 10 02:06:42.004728 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 10 02:06:42.004734 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 10 02:06:42.004740 kernel: Using GB pages for direct mapping
Mar 10 02:06:42.004746 kernel: ACPI: Early table checksum verification disabled
Mar 10 02:06:42.004753 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 10 02:06:42.004759 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:06:42.004770 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:06:42.004781 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:06:42.004796 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 10 02:06:42.004807 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:06:42.004818 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:06:42.004828 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:06:42.004839 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 02:06:42.004854 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 10 02:06:42.004868 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 10 02:06:42.004879 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 10 02:06:42.004891 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 10 02:06:42.004897 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 10 02:06:42.004903 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 10 02:06:42.004910 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 10 02:06:42.004916 kernel: No NUMA configuration found
Mar 10 02:06:42.004922 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 10 02:06:42.004931 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 10 02:06:42.004937 kernel: Zone ranges:
Mar 10 02:06:42.004960 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 10 02:06:42.004967 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 10 02:06:42.004973 kernel: Normal empty
Mar 10 02:06:42.004980 kernel: Device empty
Mar 10 02:06:42.004986 kernel: Movable zone start for each node
Mar 10 02:06:42.004992 kernel: Early memory node ranges
Mar 10 02:06:42.004998 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 10 02:06:42.005004 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 10 02:06:42.005013 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 10 02:06:42.005020 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 02:06:42.005026 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 10 02:06:42.005032 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 10 02:06:42.005038 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 10 02:06:42.005045 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 10 02:06:42.005051 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 10 02:06:42.005057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 10 02:06:42.005063 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 10 02:06:42.005072 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 10 02:06:42.005078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 10 02:06:42.005084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 10 02:06:42.005090 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 10 02:06:42.005097 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 10 02:06:42.005103 kernel: TSC deadline timer available
Mar 10 02:06:42.005109 kernel: CPU topo: Max. logical packages: 1
Mar 10 02:06:42.005115 kernel: CPU topo: Max. logical dies: 1
Mar 10 02:06:42.005122 kernel: CPU topo: Max. dies per package: 1
Mar 10 02:06:42.005130 kernel: CPU topo: Max. threads per core: 1
Mar 10 02:06:42.005136 kernel: CPU topo: Num. cores per package: 4
Mar 10 02:06:42.005142 kernel: CPU topo: Num. threads per package: 4
Mar 10 02:06:42.005148 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 10 02:06:42.005154 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 10 02:06:42.005161 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 10 02:06:42.005167 kernel: kvm-guest: setup PV sched yield
Mar 10 02:06:42.005173 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 10 02:06:42.005185 kernel: Booting paravirtualized kernel on KVM
Mar 10 02:06:42.005197 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 10 02:06:42.005212 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 10 02:06:42.005224 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 10 02:06:42.005231 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 10 02:06:42.005264 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 10 02:06:42.005270 kernel: kvm-guest: PV spinlocks enabled
Mar 10 02:06:42.005277 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 10 02:06:42.005284 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bcd0808bf4ec60436f0ff2e8373a873eb88ae42d4ac26e6e6d81129499700895
Mar 10 02:06:42.005291 kernel: random: crng init done
Mar 10 02:06:42.005300 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 02:06:42.005307 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 02:06:42.005313 kernel: Fallback order for Node 0: 0
Mar 10 02:06:42.005319 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 10 02:06:42.005325 kernel: Policy zone: DMA32
Mar 10 02:06:42.005332 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 02:06:42.005338 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 10 02:06:42.005344 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 10 02:06:42.005351 kernel: ftrace: allocated 157 pages with 5 groups
Mar 10 02:06:42.005359 kernel: Dynamic Preempt: voluntary
Mar 10 02:06:42.005365 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 02:06:42.005372 kernel: rcu: RCU event tracing is enabled.
Mar 10 02:06:42.005379 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 10 02:06:42.005385 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 02:06:42.005392 kernel: Rude variant of Tasks RCU enabled.
Mar 10 02:06:42.005398 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 02:06:42.005404 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 02:06:42.005440 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 10 02:06:42.005447 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 02:06:42.005457 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 02:06:42.005463 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 02:06:42.005470 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 10 02:06:42.005476 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 02:06:42.005489 kernel: Console: colour VGA+ 80x25
Mar 10 02:06:42.005498 kernel: printk: legacy console [ttyS0] enabled
Mar 10 02:06:42.005504 kernel: ACPI: Core revision 20240827
Mar 10 02:06:42.005511 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 10 02:06:42.005517 kernel: APIC: Switch to symmetric I/O mode setup
Mar 10 02:06:42.005524 kernel: x2apic enabled
Mar 10 02:06:42.005530 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 10 02:06:42.005539 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 10 02:06:42.005546 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 10 02:06:42.005553 kernel: kvm-guest: setup PV IPIs
Mar 10 02:06:42.005559 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 10 02:06:42.005566 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 10 02:06:42.005575 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 10 02:06:42.005581 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 10 02:06:42.005588 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 10 02:06:42.005594 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 10 02:06:42.005601 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 10 02:06:42.005608 kernel: Spectre V2 : Mitigation: Retpolines
Mar 10 02:06:42.005614 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 10 02:06:42.005621 kernel: Speculative Store Bypass: Vulnerable
Mar 10 02:06:42.005627 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 10 02:06:42.005637 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 10 02:06:42.005643 kernel: active return thunk: srso_alias_return_thunk
Mar 10 02:06:42.005650 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 10 02:06:42.005657 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 10 02:06:42.005663 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 10 02:06:42.005670 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 10 02:06:42.005676 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 10 02:06:42.005683 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 10 02:06:42.005692 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 10 02:06:42.005699 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 10 02:06:42.005705 kernel: Freeing SMP alternatives memory: 32K
Mar 10 02:06:42.005712 kernel: pid_max: default: 32768 minimum: 301
Mar 10 02:06:42.005718 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 10 02:06:42.005725 kernel: landlock: Up and running.
Mar 10 02:06:42.005731 kernel: SELinux: Initializing.
Mar 10 02:06:42.005738 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 02:06:42.005744 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 02:06:42.005753 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 10 02:06:42.005760 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 10 02:06:42.005773 kernel: signal: max sigframe size: 1776
Mar 10 02:06:42.005785 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 02:06:42.005797 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 02:06:42.005809 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 10 02:06:42.005821 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 10 02:06:42.005833 kernel: smp: Bringing up secondary CPUs ...
Mar 10 02:06:42.005845 kernel: smpboot: x86: Booting SMP configuration:
Mar 10 02:06:42.005861 kernel: .... node #0, CPUs: #1 #2 #3
Mar 10 02:06:42.005872 kernel: smp: Brought up 1 node, 4 CPUs
Mar 10 02:06:42.005879 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 10 02:06:42.005886 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145096K reserved, 0K cma-reserved)
Mar 10 02:06:42.005893 kernel: devtmpfs: initialized
Mar 10 02:06:42.005900 kernel: x86/mm: Memory block size: 128MB
Mar 10 02:06:42.005906 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 02:06:42.005913 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 10 02:06:42.005920 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 02:06:42.005928 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 02:06:42.005935 kernel: audit: initializing netlink subsys (disabled)
Mar 10 02:06:42.005942 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 02:06:42.005951 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 10 02:06:42.005964 kernel: audit: type=2000 audit(1773108398.389:1): state=initialized audit_enabled=0 res=1
Mar 10 02:06:42.005975 kernel: cpuidle: using governor menu
Mar 10 02:06:42.005987 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 02:06:42.006000 kernel: dca service started, version 1.12.1
Mar 10 02:06:42.006012 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 10 02:06:42.006029 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 10 02:06:42.006040 kernel: PCI: Using configuration type 1 for base access
Mar 10 02:06:42.006047 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 10 02:06:42.006054 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 02:06:42.006066 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 02:06:42.006080 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 02:06:42.006091 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 02:06:42.006104 kernel: ACPI: Added _OSI(Module Device)
Mar 10 02:06:42.006115 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 02:06:42.006133 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 02:06:42.006145 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 02:06:42.006175 kernel: ACPI: Interpreter enabled
Mar 10 02:06:42.006185 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 10 02:06:42.006215 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 10 02:06:42.006262 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 10 02:06:42.006308 kernel: PCI: Using E820 reservations for host bridge windows
Mar 10 02:06:42.006333 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 10 02:06:42.006344 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 10 02:06:42.006676 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 10 02:06:42.006870 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 10 02:06:42.007079 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 10 02:06:42.007096 kernel: PCI host bridge to bus 0000:00
Mar 10 02:06:42.007362 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 10 02:06:42.007587 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 10 02:06:42.007763 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 10 02:06:42.007936 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 10 02:06:42.008101 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 10 02:06:42.008307 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 10 02:06:42.008474 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 10 02:06:42.008616 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 10 02:06:42.008743 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 10 02:06:42.008921 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 10 02:06:42.009045 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 10 02:06:42.009261 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 10 02:06:42.009551 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 10 02:06:42.009718 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 10 02:06:42.010181 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 10 02:06:42.010345 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 10 02:06:42.010524 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 10 02:06:42.010652 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 10 02:06:42.010777 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 10 02:06:42.010954 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 10 02:06:42.011073 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 10 02:06:42.011208 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 10 02:06:42.011399 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 10 02:06:42.011566 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 10 02:06:42.011683 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 10 02:06:42.011825 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 10 02:06:42.011971 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 10 02:06:42.012089 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 10 02:06:42.012276 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 10 02:06:42.012406 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 10 02:06:42.012570 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 10 02:06:42.012851 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 10 02:06:42.012993 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 10 02:06:42.013005 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 10 02:06:42.013013 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 10 02:06:42.013019 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 10 02:06:42.013030 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 10 02:06:42.013037 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 10 02:06:42.013044 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 10 02:06:42.013050 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 10 02:06:42.013057 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 10 02:06:42.013064 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 10 02:06:42.013071 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 10 02:06:42.013077 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 10 02:06:42.013084 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 10 02:06:42.013093 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 10 02:06:42.013100 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 10 02:06:42.013107 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 10 02:06:42.013113 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 10 02:06:42.013120 kernel: iommu: Default domain type: Translated
Mar 10 02:06:42.013126 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 10 02:06:42.013133 kernel: PCI: Using ACPI for IRQ routing
Mar 10 02:06:42.013140 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 10 02:06:42.013146 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 10 02:06:42.013155 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 10 02:06:42.013332 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 10 02:06:42.013495 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 10 02:06:42.013613 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 10 02:06:42.013623 kernel: vgaarb: loaded
Mar 10 02:06:42.013630 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 10 02:06:42.013637 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 10 02:06:42.013643 kernel: clocksource: Switched to clocksource kvm-clock
Mar 10 02:06:42.013650 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 02:06:42.013661 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 02:06:42.013668 kernel: pnp: PnP ACPI init
Mar 10 02:06:42.013812 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 10 02:06:42.013832 kernel: pnp: PnP ACPI: found 6 devices
Mar 10 02:06:42.013845 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 10 02:06:42.013858 kernel: NET: Registered PF_INET protocol family
Mar 10 02:06:42.013866 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 02:06:42.013872 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 02:06:42.013883 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 02:06:42.013890 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 02:06:42.013897 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 02:06:42.013904 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 02:06:42.013910 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 02:06:42.013917 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 02:06:42.013924 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 02:06:42.013930 kernel: NET: Registered PF_XDP protocol family
Mar 10 02:06:42.014046 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 10 02:06:42.014157 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 10 02:06:42.014406 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 10 02:06:42.014556 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 10 02:06:42.014664 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 10 02:06:42.014770 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 10 02:06:42.014779 kernel: PCI: CLS 0 bytes, default 64
Mar 10 02:06:42.014787 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 10 02:06:42.014793 kernel: Initialise system trusted keyrings
Mar 10 02:06:42.014805 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 02:06:42.014811 kernel: Key type asymmetric registered
Mar 10 02:06:42.014818 kernel: Asymmetric key parser 'x509' registered
Mar 10 02:06:42.014825 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 10 02:06:42.014831 kernel: io scheduler mq-deadline registered
Mar 10 02:06:42.014838 kernel: io scheduler kyber registered
Mar 10 02:06:42.014845 kernel: io scheduler bfq registered
Mar 10 02:06:42.014851 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 10 02:06:42.014859 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 10 02:06:42.014868 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 10 02:06:42.014875 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 10 02:06:42.014881 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 02:06:42.014888 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 10 02:06:42.014895 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 10 02:06:42.014902 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 10 02:06:42.014909 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 10 02:06:42.015031 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 10 02:06:42.015042 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 10 02:06:42.015161 kernel: rtc_cmos 00:04: registered as rtc0
Mar 10 02:06:42.015349 kernel: rtc_cmos 00:04: setting system clock to 2026-03-10T02:06:41 UTC (1773108401)
Mar 10 02:06:42.015602 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 10 02:06:42.015616 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 10 02:06:42.015624 kernel: NET: Registered PF_INET6 protocol family
Mar 10 02:06:42.015630 kernel: Segment Routing with IPv6
Mar 10 02:06:42.015637 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 02:06:42.015645 kernel: NET: Registered PF_PACKET protocol family
Mar 10 02:06:42.015663 kernel: Key type dns_resolver registered
Mar 10 02:06:42.015696 kernel: IPI shorthand broadcast: enabled
Mar 10 02:06:42.015707 kernel: sched_clock: Marking stable (3119013577, 433551031)->(3732551257, -179986649)
Mar 10 02:06:42.015718 kernel: registered taskstats version 1
Mar 10 02:06:42.015729 kernel: Loading compiled-in X.509 certificates
Mar 10 02:06:42.015741 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 64a6e3ad023f02465a8c66e81554b4b2e64fb972'
Mar 10 02:06:42.015753 kernel: Demotion targets for Node 0: null
Mar 10 02:06:42.015765 kernel: Key type .fscrypt registered
Mar 10 02:06:42.015772 kernel: Key type fscrypt-provisioning registered
Mar 10 02:06:42.015782 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 10 02:06:42.015789 kernel: ima: Allocated hash algorithm: sha1
Mar 10 02:06:42.015796 kernel: ima: No architecture policies found
Mar 10 02:06:42.015803 kernel: clk: Disabling unused clocks
Mar 10 02:06:42.015809 kernel: Warning: unable to open an initial console.
Mar 10 02:06:42.015816 kernel: Freeing unused kernel image (initmem) memory: 46204K
Mar 10 02:06:42.015823 kernel: Write protecting the kernel read-only data: 40960k
Mar 10 02:06:42.015830 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 10 02:06:42.015839 kernel: Run /init as init process
Mar 10 02:06:42.015846 kernel: with arguments:
Mar 10 02:06:42.015852 kernel: /init
Mar 10 02:06:42.015859 kernel: with environment:
Mar 10 02:06:42.015866 kernel: HOME=/
Mar 10 02:06:42.015872 kernel: TERM=linux
Mar 10 02:06:42.015880 systemd[1]: Successfully made /usr/ read-only.
Mar 10 02:06:42.015890 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 10 02:06:42.015900 systemd[1]: Detected virtualization kvm.
Mar 10 02:06:42.015907 systemd[1]: Detected architecture x86-64.
Mar 10 02:06:42.015914 systemd[1]: Running in initrd.
Mar 10 02:06:42.015921 systemd[1]: No hostname configured, using default hostname.
Mar 10 02:06:42.015929 systemd[1]: Hostname set to .
Mar 10 02:06:42.015936 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 02:06:42.015943 systemd[1]: Queued start job for default target initrd.target.
Mar 10 02:06:42.015951 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 02:06:42.015968 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 02:06:42.015979 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 02:06:42.015986 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 02:06:42.015994 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 02:06:42.016002 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 02:06:42.016013 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 02:06:42.016020 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 02:06:42.016028 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 02:06:42.016035 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 02:06:42.016043 systemd[1]: Reached target paths.target - Path Units.
Mar 10 02:06:42.016050 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 02:06:42.016058 systemd[1]: Reached target swap.target - Swaps.
Mar 10 02:06:42.016065 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 02:06:42.016072 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 02:06:42.016082 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 02:06:42.016090 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 02:06:42.016097 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 10 02:06:42.016105 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 02:06:42.016112 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 02:06:42.016119 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 02:06:42.016127 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 02:06:42.016134 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 10 02:06:42.016144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 02:06:42.016151 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 10 02:06:42.016159 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 10 02:06:42.016167 systemd[1]: Starting systemd-fsck-usr.service...
Mar 10 02:06:42.016174 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 02:06:42.016182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 02:06:42.016194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 02:06:42.016208 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 10 02:06:42.016228 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 02:06:42.016303 systemd-journald[203]: Collecting audit messages is disabled.
Mar 10 02:06:42.016330 systemd[1]: Finished systemd-fsck-usr.service.
Mar 10 02:06:42.016338 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 10 02:06:42.016346 systemd-journald[203]: Journal started
Mar 10 02:06:42.016365 systemd-journald[203]: Runtime Journal (/run/log/journal/e1da395963d047468112d79b2b1056b6) is 6M, max 48.3M, 42.2M free.
Mar 10 02:06:42.009643 systemd-modules-load[204]: Inserted module 'overlay'
Mar 10 02:06:42.026440 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 02:06:42.034222 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 02:06:42.050484 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 10 02:06:42.052742 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 10 02:06:42.155276 kernel: Bridge firewalling registered
Mar 10 02:06:42.052958 systemd-modules-load[204]: Inserted module 'br_netfilter'
Mar 10 02:06:42.174710 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 02:06:42.179669 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 02:06:42.191364 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 10 02:06:42.192752 systemd-tmpfiles[215]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 10 02:06:42.194119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 02:06:42.223720 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 02:06:42.231983 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 02:06:42.249676 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 02:06:42.251707 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 02:06:42.266381 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 02:06:42.272025 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 02:06:42.274548 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 10 02:06:42.311790 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bcd0808bf4ec60436f0ff2e8373a873eb88ae42d4ac26e6e6d81129499700895
Mar 10 02:06:42.330743 systemd-resolved[240]: Positive Trust Anchors:
Mar 10 02:06:42.330755 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 02:06:42.330798 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 02:06:42.335561 systemd-resolved[240]: Defaulting to hostname 'linux'.
Mar 10 02:06:42.337045 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 02:06:42.339566 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 02:06:42.458465 kernel: SCSI subsystem initialized
Mar 10 02:06:42.468518 kernel: Loading iSCSI transport class v2.0-870.
Mar 10 02:06:42.481467 kernel: iscsi: registered transport (tcp)
Mar 10 02:06:42.502783 kernel: iscsi: registered transport (qla4xxx)
Mar 10 02:06:42.502843 kernel: QLogic iSCSI HBA Driver
Mar 10 02:06:42.527986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 10 02:06:42.557857 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 02:06:42.562183 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 10 02:06:42.621918 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 10 02:06:42.626142 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 10 02:06:42.700474 kernel: raid6: avx2x4 gen() 24930 MB/s
Mar 10 02:06:42.718469 kernel: raid6: avx2x2 gen() 23409 MB/s
Mar 10 02:06:42.737264 kernel: raid6: avx2x1 gen() 16992 MB/s
Mar 10 02:06:42.737297 kernel: raid6: using algorithm avx2x4 gen() 24930 MB/s
Mar 10 02:06:42.757513 kernel: raid6: .... xor() 4239 MB/s, rmw enabled
Mar 10 02:06:42.757549 kernel: raid6: using avx2x2 recovery algorithm
Mar 10 02:06:42.783476 kernel: xor: automatically using best checksumming function avx
Mar 10 02:06:42.940490 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 10 02:06:42.948267 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 02:06:42.953736 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 02:06:42.986337 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Mar 10 02:06:42.992065 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 02:06:42.994607 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 10 02:06:43.022974 dracut-pre-trigger[457]: rd.md=0: removing MD RAID activation
Mar 10 02:06:43.055773 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 02:06:43.060003 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 02:06:43.147518 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 02:06:43.150981 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 10 02:06:43.202470 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 10 02:06:43.204488 kernel: cryptd: max_cpu_qlen set to 1000
Mar 10 02:06:43.214458 kernel: libata version 3.00 loaded.
Mar 10 02:06:43.218312 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 10 02:06:43.219532 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 02:06:43.219647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 02:06:43.227149 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 02:06:43.242028 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 10 02:06:43.242055 kernel: GPT:9289727 != 19775487
Mar 10 02:06:43.242065 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 10 02:06:43.242075 kernel: GPT:9289727 != 19775487
Mar 10 02:06:43.242084 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 10 02:06:43.242093 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 02:06:43.231130 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 02:06:43.243341 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 10 02:06:43.251474 kernel: ahci 0000:00:1f.2: version 3.0
Mar 10 02:06:43.254612 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 10 02:06:43.254635 kernel: AES CTR mode by8 optimization enabled
Mar 10 02:06:43.261545 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 10 02:06:43.261724 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 10 02:06:43.261873 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 10 02:06:43.281487 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 10 02:06:43.286542 kernel: scsi host0: ahci
Mar 10 02:06:43.286766 kernel: scsi host1: ahci
Mar 10 02:06:43.289286 kernel: scsi host2: ahci
Mar 10 02:06:43.293468 kernel: scsi host3: ahci
Mar 10 02:06:43.296470 kernel: scsi host4: ahci
Mar 10 02:06:43.298473 kernel: scsi host5: ahci
Mar 10 02:06:43.307489 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Mar 10 02:06:43.307512 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Mar 10 02:06:43.307523 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Mar 10 02:06:43.307532 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Mar 10 02:06:43.311339 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Mar 10 02:06:43.311361 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Mar 10 02:06:43.319837 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 10 02:06:43.434122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 02:06:43.450231 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 10 02:06:43.465448 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 02:06:43.481474 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 10 02:06:43.483167 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 10 02:06:43.497524 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 10 02:06:43.524660 disk-uuid[619]: Primary Header is updated.
Mar 10 02:06:43.524660 disk-uuid[619]: Secondary Entries is updated.
Mar 10 02:06:43.524660 disk-uuid[619]: Secondary Header is updated.
Mar 10 02:06:43.534328 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 02:06:43.621516 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 10 02:06:43.621583 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 10 02:06:43.624459 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 10 02:06:43.626497 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 10 02:06:43.628460 kernel: ata3.00: LPM support broken, forcing max_power
Mar 10 02:06:43.631275 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 10 02:06:43.631296 kernel: ata3.00: applying bridge limits
Mar 10 02:06:43.633475 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 10 02:06:43.637484 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 10 02:06:43.637520 kernel: ata3.00: LPM support broken, forcing max_power
Mar 10 02:06:43.640308 kernel: ata3.00: configured for UDMA/100
Mar 10 02:06:43.644454 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 10 02:06:43.697469 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 10 02:06:43.697689 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 10 02:06:43.710446 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 10 02:06:44.132640 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 10 02:06:44.135820 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 02:06:44.141005 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 02:06:44.143910 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 02:06:44.150012 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 10 02:06:44.184657 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 02:06:44.544539 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 10 02:06:44.544902 disk-uuid[620]: The operation has completed successfully.
Mar 10 02:06:44.581843 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 10 02:06:44.581987 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 10 02:06:44.609605 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 10 02:06:44.640296 sh[649]: Success
Mar 10 02:06:44.663237 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 10 02:06:44.663349 kernel: device-mapper: uevent: version 1.0.3
Mar 10 02:06:44.666213 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 10 02:06:44.680487 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 10 02:06:44.723793 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 10 02:06:44.732698 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 10 02:06:44.753202 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 10 02:06:44.770821 kernel: BTRFS: device fsid 91a17919-8e0b-4e39-b5e3-1547b6175986 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (661)
Mar 10 02:06:44.770850 kernel: BTRFS info (device dm-0): first mount of filesystem 91a17919-8e0b-4e39-b5e3-1547b6175986
Mar 10 02:06:44.770867 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 10 02:06:44.776753 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 10 02:06:44.776786 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 10 02:06:44.778089 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 10 02:06:44.781317 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 10 02:06:44.783230 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 10 02:06:44.784144 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 10 02:06:44.804140 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 10 02:06:44.833467 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (690)
Mar 10 02:06:44.839477 kernel: BTRFS info (device vda6): first mount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6
Mar 10 02:06:44.839506 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 02:06:44.845847 kernel: BTRFS info (device vda6): turning on async discard
Mar 10 02:06:44.845873 kernel: BTRFS info (device vda6): enabling free space tree
Mar 10 02:06:44.854469 kernel: BTRFS info (device vda6): last unmount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6
Mar 10 02:06:44.856558 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 10 02:06:44.859994 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 10 02:06:44.955228 ignition[744]: Ignition 2.22.0
Mar 10 02:06:44.955286 ignition[744]: Stage: fetch-offline
Mar 10 02:06:44.955341 ignition[744]: no configs at "/usr/lib/ignition/base.d"
Mar 10 02:06:44.955354 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 02:06:44.955500 ignition[744]: parsed url from cmdline: ""
Mar 10 02:06:44.955505 ignition[744]: no config URL provided
Mar 10 02:06:44.955513 ignition[744]: reading system config file "/usr/lib/ignition/user.ign"
Mar 10 02:06:44.955525 ignition[744]: no config at "/usr/lib/ignition/user.ign"
Mar 10 02:06:44.955550 ignition[744]: op(1): [started] loading QEMU firmware config module
Mar 10 02:06:44.955557 ignition[744]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 10 02:06:44.974320 ignition[744]: op(1): [finished] loading QEMU firmware config module
Mar 10 02:06:44.978965 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 02:06:44.983502 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 02:06:45.031073 systemd-networkd[842]: lo: Link UP
Mar 10 02:06:45.031097 systemd-networkd[842]: lo: Gained carrier
Mar 10 02:06:45.032723 systemd-networkd[842]: Enumeration completed
Mar 10 02:06:45.032799 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 02:06:45.033793 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 02:06:45.033797 systemd-networkd[842]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 02:06:45.034293 systemd-networkd[842]: eth0: Link UP
Mar 10 02:06:45.035893 systemd-networkd[842]: eth0: Gained carrier
Mar 10 02:06:45.035903 systemd-networkd[842]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 02:06:45.036788 systemd[1]: Reached target network.target - Network.
Mar 10 02:06:45.071486 systemd-networkd[842]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 02:06:45.178153 ignition[744]: parsing config with SHA512: e66f8e9b7bc0c50f9f2a2e6fe26ee2dee98e8e81f0562efa7a765eedf672f48ed9b0587a09b8467c00b8a499c9f0a304f54ed0919d9cd390dba43298d70491f4
Mar 10 02:06:45.185831 unknown[744]: fetched base config from "system"
Mar 10 02:06:45.185860 unknown[744]: fetched user config from "qemu"
Mar 10 02:06:45.191213 ignition[744]: fetch-offline: fetch-offline passed
Mar 10 02:06:45.191333 ignition[744]: Ignition finished successfully
Mar 10 02:06:45.199309 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 02:06:45.203352 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 10 02:06:45.204297 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 10 02:06:45.252928 ignition[847]: Ignition 2.22.0
Mar 10 02:06:45.252960 ignition[847]: Stage: kargs
Mar 10 02:06:45.253070 ignition[847]: no configs at "/usr/lib/ignition/base.d"
Mar 10 02:06:45.253079 ignition[847]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 02:06:45.253681 ignition[847]: kargs: kargs passed
Mar 10 02:06:45.253722 ignition[847]: Ignition finished successfully
Mar 10 02:06:45.267196 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 10 02:06:45.270022 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 10 02:06:45.310153 ignition[855]: Ignition 2.22.0
Mar 10 02:06:45.310184 ignition[855]: Stage: disks
Mar 10 02:06:45.310334 ignition[855]: no configs at "/usr/lib/ignition/base.d"
Mar 10 02:06:45.310344 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 02:06:45.310905 ignition[855]: disks: disks passed
Mar 10 02:06:45.310945 ignition[855]: Ignition finished successfully
Mar 10 02:06:45.318630 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 10 02:06:45.323249 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 10 02:06:45.329543 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 10 02:06:45.333530 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 02:06:45.336813 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 02:06:45.338489 systemd[1]: Reached target basic.target - Basic System.
Mar 10 02:06:45.347101 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 10 02:06:45.383740 systemd-fsck[865]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 10 02:06:45.391133 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 10 02:06:45.398499 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 10 02:06:45.520485 kernel: EXT4-fs (vda9): mounted filesystem 494bf987-03e9-4980-9fc3-4af435e63ebe r/w with ordered data mode. Quota mode: none.
Mar 10 02:06:45.520921 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 10 02:06:45.523987 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 10 02:06:45.530538 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 02:06:45.554941 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 10 02:06:45.567729 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (873)
Mar 10 02:06:45.558053 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 10 02:06:45.584496 kernel: BTRFS info (device vda6): first mount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6
Mar 10 02:06:45.584517 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 02:06:45.584533 kernel: BTRFS info (device vda6): turning on async discard
Mar 10 02:06:45.584543 kernel: BTRFS info (device vda6): enabling free space tree
Mar 10 02:06:45.558098 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 10 02:06:45.558121 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 02:06:45.568719 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 10 02:06:45.585527 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 02:06:45.596028 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 10 02:06:45.644731 initrd-setup-root[897]: cut: /sysroot/etc/passwd: No such file or directory
Mar 10 02:06:45.649589 initrd-setup-root[904]: cut: /sysroot/etc/group: No such file or directory
Mar 10 02:06:45.655553 initrd-setup-root[911]: cut: /sysroot/etc/shadow: No such file or directory
Mar 10 02:06:45.662726 initrd-setup-root[918]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 10 02:06:45.761481 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 10 02:06:45.769094 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 10 02:06:45.770991 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 10 02:06:45.799020 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 10 02:06:45.804689 kernel: BTRFS info (device vda6): last unmount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6
Mar 10 02:06:45.830616 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 10 02:06:45.853764 ignition[987]: INFO : Ignition 2.22.0
Mar 10 02:06:45.853764 ignition[987]: INFO : Stage: mount
Mar 10 02:06:45.857282 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 02:06:45.857282 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 02:06:45.857282 ignition[987]: INFO : mount: mount passed
Mar 10 02:06:45.857282 ignition[987]: INFO : Ignition finished successfully
Mar 10 02:06:45.863158 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 10 02:06:45.866781 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 10 02:06:45.892649 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 10 02:06:45.930834 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (999)
Mar 10 02:06:45.930868 kernel: BTRFS info (device vda6): first mount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6
Mar 10 02:06:45.930879 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 10 02:06:45.939665 kernel: BTRFS info (device vda6): turning on async discard
Mar 10 02:06:45.939694 kernel: BTRFS info (device vda6): enabling free space tree
Mar 10 02:06:45.941593 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 10 02:06:45.986297 ignition[1016]: INFO : Ignition 2.22.0
Mar 10 02:06:45.986297 ignition[1016]: INFO : Stage: files
Mar 10 02:06:45.991375 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 02:06:45.991375 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 02:06:45.991375 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping
Mar 10 02:06:45.991375 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 10 02:06:45.991375 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 10 02:06:45.991375 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 10 02:06:45.991375 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 10 02:06:46.014642 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 10 02:06:46.014642 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 02:06:46.014642 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 10 02:06:45.991661 unknown[1016]: wrote ssh authorized keys file for user: core
Mar 10 02:06:46.063293 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 10 02:06:46.209590 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 10 02:06:46.209590 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 02:06:46.221834 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 10 02:06:46.676471 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 10 02:06:46.979647 systemd-networkd[842]: eth0: Gained IPv6LL
Mar 10 02:06:47.284002 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 10 02:06:47.284002 ignition[1016]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 10 02:06:47.294403 ignition[1016]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 02:06:47.294403 ignition[1016]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 10 02:06:47.294403 ignition[1016]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 10 02:06:47.294403 ignition[1016]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 10 02:06:47.294403 ignition[1016]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 02:06:47.294403 ignition[1016]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 10 02:06:47.294403 ignition[1016]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 10 02:06:47.294403 ignition[1016]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 10 02:06:47.335239 ignition[1016]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 02:06:47.343953 ignition[1016]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 10 02:06:47.347997 ignition[1016]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 10 02:06:47.347997 ignition[1016]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 10 02:06:47.347997 ignition[1016]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 10 02:06:47.358561 ignition[1016]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 02:06:47.358561 ignition[1016]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 10 02:06:47.358561 ignition[1016]: INFO : files: files passed
Mar 10 02:06:47.358561 ignition[1016]: INFO : Ignition finished successfully
Mar 10 02:06:47.359357 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 10 02:06:47.361878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 10 02:06:47.383086 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 10 02:06:47.386814 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 10 02:06:47.386932 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 10 02:06:47.416296 initrd-setup-root-after-ignition[1046]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 10 02:06:47.423325 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 02:06:47.423325 initrd-setup-root-after-ignition[1048]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 02:06:47.431408 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 10 02:06:47.437243 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 02:06:47.440884 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 10 02:06:47.449060 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 10 02:06:47.517623 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 10 02:06:47.517827 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 10 02:06:47.523527 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 10 02:06:47.528082 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 10 02:06:47.532770 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 10 02:06:47.538638 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 10 02:06:47.576992 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 10 02:06:47.585022 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 10 02:06:47.617851 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 10 02:06:47.619188 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 02:06:47.625539 systemd[1]: Stopped target timers.target - Timer Units. Mar 10 02:06:47.631326 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 10 02:06:47.631586 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 10 02:06:47.639902 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 10 02:06:47.641545 systemd[1]: Stopped target basic.target - Basic System. Mar 10 02:06:47.649339 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 10 02:06:47.653526 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 10 02:06:47.659505 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Mar 10 02:06:47.665326 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Mar 10 02:06:47.670578 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 10 02:06:47.676050 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 10 02:06:47.686382 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 10 02:06:47.687502 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 10 02:06:47.696563 systemd[1]: Stopped target swap.target - Swaps. Mar 10 02:06:47.698008 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 10 02:06:47.698156 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 10 02:06:47.705623 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 10 02:06:47.710795 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 02:06:47.716350 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 10 02:06:47.719265 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 02:06:47.721072 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 10 02:06:47.721190 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 10 02:06:47.733403 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 10 02:06:47.733623 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 02:06:47.738464 systemd[1]: Stopped target paths.target - Path Units. Mar 10 02:06:47.743070 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 10 02:06:47.745884 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 02:06:47.750969 systemd[1]: Stopped target slices.target - Slice Units. Mar 10 02:06:47.752459 systemd[1]: Stopped target sockets.target - Socket Units. 
Mar 10 02:06:47.760060 systemd[1]: iscsid.socket: Deactivated successfully. Mar 10 02:06:47.760179 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 10 02:06:47.764179 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 10 02:06:47.764323 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 10 02:06:47.768403 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 10 02:06:47.768572 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 10 02:06:47.772985 systemd[1]: ignition-files.service: Deactivated successfully. Mar 10 02:06:47.773096 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 10 02:06:47.780920 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 10 02:06:47.790656 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 10 02:06:47.791965 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 10 02:06:47.792070 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 02:06:47.798083 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 10 02:06:47.798198 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 10 02:06:47.820608 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 10 02:06:47.820746 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 10 02:06:47.832118 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 10 02:06:47.840525 ignition[1072]: INFO : Ignition 2.22.0 Mar 10 02:06:47.840525 ignition[1072]: INFO : Stage: umount Mar 10 02:06:47.844864 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 02:06:47.844864 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 02:06:47.844864 ignition[1072]: INFO : umount: umount passed Mar 10 02:06:47.844864 ignition[1072]: INFO : Ignition finished successfully Mar 10 02:06:47.851481 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 10 02:06:47.851607 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 10 02:06:47.859494 systemd[1]: Stopped target network.target - Network. Mar 10 02:06:47.863676 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 10 02:06:47.863745 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 10 02:06:47.868061 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 10 02:06:47.868114 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 10 02:06:47.869341 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 10 02:06:47.869400 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 10 02:06:47.875155 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 10 02:06:47.875214 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 10 02:06:47.880132 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 10 02:06:47.885235 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 10 02:06:47.906233 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 10 02:06:47.906451 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 10 02:06:47.910998 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 10 02:06:47.911165 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Mar 10 02:06:47.920882 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 10 02:06:47.921135 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 10 02:06:47.921268 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 10 02:06:47.928192 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 10 02:06:47.929213 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 10 02:06:47.931306 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 10 02:06:47.931351 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 10 02:06:47.935486 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 10 02:06:47.935537 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 10 02:06:47.950087 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 10 02:06:47.954309 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 10 02:06:47.954365 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 02:06:47.959896 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 10 02:06:47.959968 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 10 02:06:47.967060 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 10 02:06:47.967106 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 10 02:06:47.968909 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 10 02:06:47.968957 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 02:06:47.980138 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 02:06:47.982915 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
Mar 10 02:06:47.982980 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 10 02:06:48.002604 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 10 02:06:48.002751 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 10 02:06:48.015225 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 10 02:06:48.015506 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 02:06:48.020889 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 10 02:06:48.020933 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 10 02:06:48.026983 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 10 02:06:48.027024 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 02:06:48.028396 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 10 02:06:48.028480 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 10 02:06:48.039137 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 10 02:06:48.039186 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 10 02:06:48.046156 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 10 02:06:48.046207 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 02:06:48.058189 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 10 02:06:48.059322 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 10 02:06:48.059391 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 10 02:06:48.071524 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 10 02:06:48.071590 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 10 02:06:48.078743 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 10 02:06:48.078801 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 10 02:06:48.086743 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 10 02:06:48.086801 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 02:06:48.092584 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 02:06:48.092645 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 02:06:48.102990 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Mar 10 02:06:48.103065 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Mar 10 02:06:48.103131 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 10 02:06:48.103203 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 10 02:06:48.103750 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 10 02:06:48.103901 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 10 02:06:48.110010 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 10 02:06:48.122053 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 10 02:06:48.147538 systemd[1]: Switching root. Mar 10 02:06:48.183729 systemd-journald[203]: Journal stopped Mar 10 02:06:49.432156 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). 
Mar 10 02:06:49.432218 kernel: SELinux: policy capability network_peer_controls=1 Mar 10 02:06:49.432237 kernel: SELinux: policy capability open_perms=1 Mar 10 02:06:49.432247 kernel: SELinux: policy capability extended_socket_class=1 Mar 10 02:06:49.432257 kernel: SELinux: policy capability always_check_network=0 Mar 10 02:06:49.432267 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 10 02:06:49.432303 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 10 02:06:49.432321 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 10 02:06:49.432332 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 10 02:06:49.432342 kernel: SELinux: policy capability userspace_initial_context=0 Mar 10 02:06:49.432352 kernel: audit: type=1403 audit(1773108408.394:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 10 02:06:49.432367 systemd[1]: Successfully loaded SELinux policy in 79.075ms. Mar 10 02:06:49.432391 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.236ms. Mar 10 02:06:49.432403 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 10 02:06:49.433771 systemd[1]: Detected virtualization kvm. Mar 10 02:06:49.433796 systemd[1]: Detected architecture x86-64. Mar 10 02:06:49.433868 systemd[1]: Detected first boot. Mar 10 02:06:49.433881 systemd[1]: Initializing machine ID from VM UUID. Mar 10 02:06:49.433892 zram_generator::config[1118]: No configuration found. 
Mar 10 02:06:49.433905 kernel: Guest personality initialized and is inactive Mar 10 02:06:49.433916 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Mar 10 02:06:49.433926 kernel: Initialized host personality Mar 10 02:06:49.433936 kernel: NET: Registered PF_VSOCK protocol family Mar 10 02:06:49.433950 systemd[1]: Populated /etc with preset unit settings. Mar 10 02:06:49.433961 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 10 02:06:49.433973 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 10 02:06:49.433984 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 10 02:06:49.433995 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 10 02:06:49.434006 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 10 02:06:49.434016 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 10 02:06:49.434027 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 10 02:06:49.434038 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 10 02:06:49.434051 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 10 02:06:49.434062 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 10 02:06:49.434073 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 10 02:06:49.434083 systemd[1]: Created slice user.slice - User and Session Slice. Mar 10 02:06:49.434094 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 10 02:06:49.434105 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 10 02:06:49.434117 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Mar 10 02:06:49.434128 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 10 02:06:49.434139 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 10 02:06:49.434152 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 10 02:06:49.434164 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 10 02:06:49.434174 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 10 02:06:49.434185 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 10 02:06:49.434196 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 10 02:06:49.434207 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 10 02:06:49.434218 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 10 02:06:49.434231 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 10 02:06:49.434242 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 02:06:49.434252 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 02:06:49.434263 systemd[1]: Reached target slices.target - Slice Units. Mar 10 02:06:49.434274 systemd[1]: Reached target swap.target - Swaps. Mar 10 02:06:49.434315 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 10 02:06:49.434327 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 10 02:06:49.434339 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 10 02:06:49.434350 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 10 02:06:49.434360 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 10 02:06:49.434374 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 10 02:06:49.434384 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 10 02:06:49.434396 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 10 02:06:49.434407 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 10 02:06:49.434464 systemd[1]: Mounting media.mount - External Media Directory... Mar 10 02:06:49.434476 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 02:06:49.434487 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 10 02:06:49.434498 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 10 02:06:49.434512 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 10 02:06:49.434523 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 10 02:06:49.434534 systemd[1]: Reached target machines.target - Containers. Mar 10 02:06:49.434545 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 10 02:06:49.434556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 02:06:49.434566 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 10 02:06:49.434577 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 10 02:06:49.434588 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 02:06:49.434599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 02:06:49.434612 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 02:06:49.434623 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Mar 10 02:06:49.434633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 02:06:49.434644 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 10 02:06:49.434655 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 10 02:06:49.434666 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 10 02:06:49.434678 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 10 02:06:49.434689 systemd[1]: Stopped systemd-fsck-usr.service. Mar 10 02:06:49.434702 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 10 02:06:49.434713 kernel: fuse: init (API version 7.41) Mar 10 02:06:49.434783 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 10 02:06:49.434797 kernel: loop: module loaded Mar 10 02:06:49.434807 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 10 02:06:49.434818 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 10 02:06:49.434829 kernel: ACPI: bus type drm_connector registered Mar 10 02:06:49.434839 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 10 02:06:49.434850 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 10 02:06:49.434863 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 02:06:49.434899 systemd-journald[1203]: Collecting audit messages is disabled. Mar 10 02:06:49.434921 systemd[1]: verity-setup.service: Deactivated successfully. Mar 10 02:06:49.434937 systemd[1]: Stopped verity-setup.service. 
Mar 10 02:06:49.434949 systemd-journald[1203]: Journal started Mar 10 02:06:49.434972 systemd-journald[1203]: Runtime Journal (/run/log/journal/e1da395963d047468112d79b2b1056b6) is 6M, max 48.3M, 42.2M free. Mar 10 02:06:48.998116 systemd[1]: Queued start job for default target multi-user.target. Mar 10 02:06:49.012605 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 10 02:06:49.013177 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 10 02:06:49.440476 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 02:06:49.447265 systemd[1]: Started systemd-journald.service - Journal Service. Mar 10 02:06:49.448041 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 10 02:06:49.450628 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 10 02:06:49.453336 systemd[1]: Mounted media.mount - External Media Directory. Mar 10 02:06:49.455757 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 10 02:06:49.458483 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 10 02:06:49.461199 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 10 02:06:49.463813 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 10 02:06:49.466957 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 02:06:49.470261 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 10 02:06:49.470614 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 10 02:06:49.473805 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 02:06:49.474040 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 02:06:49.477097 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 10 02:06:49.477358 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 02:06:49.480273 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 02:06:49.480576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 02:06:49.483819 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 10 02:06:49.484050 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 10 02:06:49.486981 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 02:06:49.487244 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 02:06:49.490370 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 10 02:06:49.493557 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 10 02:06:49.496883 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 10 02:06:49.500211 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 10 02:06:49.513841 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 10 02:06:49.519029 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 10 02:06:49.523752 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 10 02:06:49.527541 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 10 02:06:49.527582 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 10 02:06:49.531997 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 10 02:06:49.542891 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Mar 10 02:06:49.546776 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 02:06:49.547955 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 10 02:06:49.552599 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 10 02:06:49.556788 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 02:06:49.559540 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 10 02:06:49.563779 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 02:06:49.566503 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 10 02:06:49.572575 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 10 02:06:49.575593 systemd-journald[1203]: Time spent on flushing to /var/log/journal/e1da395963d047468112d79b2b1056b6 is 22.574ms for 975 entries. Mar 10 02:06:49.575593 systemd-journald[1203]: System Journal (/var/log/journal/e1da395963d047468112d79b2b1056b6) is 8M, max 195.6M, 187.6M free. Mar 10 02:06:49.618687 systemd-journald[1203]: Received client request to flush runtime journal. Mar 10 02:06:49.618751 kernel: loop0: detected capacity change from 0 to 110984 Mar 10 02:06:49.579039 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 10 02:06:49.583927 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 02:06:49.585277 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 10 02:06:49.590917 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Mar 10 02:06:49.598741 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 10 02:06:49.604617 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 10 02:06:49.614597 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 10 02:06:49.622622 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 10 02:06:49.632901 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 10 02:06:49.635229 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Mar 10 02:06:49.635243 systemd-tmpfiles[1238]: ACLs are not supported, ignoring. Mar 10 02:06:49.640500 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 10 02:06:49.642110 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 10 02:06:49.649153 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 10 02:06:49.663472 kernel: loop1: detected capacity change from 0 to 219192 Mar 10 02:06:49.669190 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 10 02:06:49.697360 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 10 02:06:49.705179 kernel: loop2: detected capacity change from 0 to 128560 Mar 10 02:06:49.705731 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 10 02:06:49.728885 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Mar 10 02:06:49.728917 systemd-tmpfiles[1260]: ACLs are not supported, ignoring. Mar 10 02:06:49.735409 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Mar 10 02:06:49.738927 kernel: loop3: detected capacity change from 0 to 110984 Mar 10 02:06:49.758447 kernel: loop4: detected capacity change from 0 to 219192 Mar 10 02:06:49.775485 kernel: loop5: detected capacity change from 0 to 128560 Mar 10 02:06:49.787782 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 10 02:06:49.788448 (sd-merge)[1263]: Merged extensions into '/usr'. Mar 10 02:06:49.794595 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)... Mar 10 02:06:49.794740 systemd[1]: Reloading... Mar 10 02:06:49.858553 zram_generator::config[1292]: No configuration found. Mar 10 02:06:49.951103 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 10 02:06:50.067514 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 10 02:06:50.067899 systemd[1]: Reloading finished in 272 ms. Mar 10 02:06:50.100365 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 10 02:06:50.103609 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 10 02:06:50.107095 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 10 02:06:50.140867 systemd[1]: Starting ensure-sysext.service... Mar 10 02:06:50.144077 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 10 02:06:50.147176 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 02:06:50.164035 systemd[1]: Reload requested from client PID 1328 ('systemctl') (unit ensure-sysext.service)... Mar 10 02:06:50.164060 systemd[1]: Reloading... Mar 10 02:06:50.175716 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Mar 10 02:06:50.175779 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 10 02:06:50.176242 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 10 02:06:50.176753 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 10 02:06:50.178775 systemd-tmpfiles[1329]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 10 02:06:50.179151 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Mar 10 02:06:50.179397 systemd-tmpfiles[1329]: ACLs are not supported, ignoring.
Mar 10 02:06:50.183722 systemd-udevd[1330]: Using default interface naming scheme 'v255'.
Mar 10 02:06:50.184597 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 02:06:50.184617 systemd-tmpfiles[1329]: Skipping /boot
Mar 10 02:06:50.199186 systemd-tmpfiles[1329]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 02:06:50.201164 systemd-tmpfiles[1329]: Skipping /boot
Mar 10 02:06:50.226478 zram_generator::config[1357]: No configuration found.
Mar 10 02:06:50.377475 kernel: mousedev: PS/2 mouse device common for all mice
Mar 10 02:06:50.388519 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 10 02:06:50.405470 kernel: ACPI: button: Power Button [PWRF]
Mar 10 02:06:50.428135 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 10 02:06:50.428498 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 10 02:06:50.454078 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 10 02:06:50.454220 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 10 02:06:50.459205 systemd[1]: Reloading finished in 294 ms.
Mar 10 02:06:50.468233 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 02:06:50.472042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 02:06:50.550833 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 02:06:50.556083 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 10 02:06:50.563538 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 10 02:06:50.568678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 02:06:50.570366 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 02:06:50.580187 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 02:06:50.584047 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 02:06:50.586560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 02:06:50.588618 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 10 02:06:50.591460 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 10 02:06:50.594732 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 10 02:06:50.599378 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 10 02:06:50.606368 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 02:06:50.611090 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 10 02:06:50.612501 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 02:06:50.615526 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 02:06:50.616079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 02:06:50.619672 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 02:06:50.620163 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 02:06:50.624532 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 02:06:50.625259 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 02:06:50.632014 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 10 02:06:50.649071 kernel: kvm_amd: TSC scaling supported
Mar 10 02:06:50.649138 kernel: kvm_amd: Nested Virtualization enabled
Mar 10 02:06:50.649159 kernel: kvm_amd: Nested Paging enabled
Mar 10 02:06:50.651185 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 10 02:06:50.651227 kernel: kvm_amd: PMU virtualization is disabled
Mar 10 02:06:50.650970 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 10 02:06:50.668567 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 10 02:06:50.681242 augenrules[1481]: No rules
Mar 10 02:06:50.682964 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 10 02:06:50.687271 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 10 02:06:50.687875 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 10 02:06:50.704216 systemd[1]: Finished ensure-sysext.service.
Mar 10 02:06:50.707223 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 02:06:50.707605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 02:06:50.709877 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 02:06:50.715676 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 02:06:50.721549 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 02:06:50.726488 kernel: EDAC MC: Ver: 3.0.0
Mar 10 02:06:50.726659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 02:06:50.729747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 02:06:50.729797 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 10 02:06:50.731926 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 10 02:06:50.735712 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 10 02:06:50.740186 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 10 02:06:50.743827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 10 02:06:50.746368 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 10 02:06:50.746491 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 02:06:50.747325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 02:06:50.749627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 02:06:50.752843 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 02:06:50.753076 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 02:06:50.757759 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 02:06:50.758028 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 02:06:50.761368 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 02:06:50.761650 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 02:06:50.767497 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 10 02:06:50.769173 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 02:06:50.769243 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 02:06:50.797483 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 10 02:06:50.874641 systemd-networkd[1459]: lo: Link UP
Mar 10 02:06:50.874657 systemd-networkd[1459]: lo: Gained carrier
Mar 10 02:06:50.875898 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 10 02:06:50.876619 systemd-networkd[1459]: Enumeration completed
Mar 10 02:06:50.877164 systemd-networkd[1459]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 02:06:50.877172 systemd-networkd[1459]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 10 02:06:50.877693 systemd-networkd[1459]: eth0: Link UP
Mar 10 02:06:50.877903 systemd-networkd[1459]: eth0: Gained carrier
Mar 10 02:06:50.877921 systemd-networkd[1459]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 10 02:06:50.883596 systemd-resolved[1460]: Positive Trust Anchors:
Mar 10 02:06:50.883817 systemd-resolved[1460]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 10 02:06:50.883885 systemd-resolved[1460]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 10 02:06:50.887628 systemd-resolved[1460]: Defaulting to hostname 'linux'.
Mar 10 02:06:50.894479 systemd-networkd[1459]: eth0: DHCPv4 address 10.0.0.149/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 10 02:06:50.895079 systemd-timesyncd[1496]: Network configuration changed, trying to establish connection.
Mar 10 02:06:51.343412 systemd-resolved[1460]: Clock change detected. Flushing caches.
Mar 10 02:06:51.343450 systemd-timesyncd[1496]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 10 02:06:51.343489 systemd-timesyncd[1496]: Initial clock synchronization to Tue 2026-03-10 02:06:51.343391 UTC.
Mar 10 02:06:51.387267 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 10 02:06:51.390027 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 10 02:06:51.393318 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 02:06:51.396980 systemd[1]: Reached target network.target - Network.
Mar 10 02:06:51.399256 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 10 02:06:51.402194 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 10 02:06:51.405020 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 10 02:06:51.408022 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 10 02:06:51.411042 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 10 02:06:51.413845 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 10 02:06:51.416853 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 10 02:06:51.416898 systemd[1]: Reached target paths.target - Path Units.
Mar 10 02:06:51.419141 systemd[1]: Reached target time-set.target - System Time Set.
Mar 10 02:06:51.421746 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 10 02:06:51.424464 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 10 02:06:51.427459 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 02:06:51.430563 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 10 02:06:51.434632 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 10 02:06:51.438670 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 10 02:06:51.441761 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 10 02:06:51.444663 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 10 02:06:51.449046 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 10 02:06:51.451758 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 10 02:06:51.455764 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 10 02:06:51.459556 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 10 02:06:51.463357 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 10 02:06:51.466775 systemd[1]: Reached target sockets.target - Socket Units.
Mar 10 02:06:51.469390 systemd[1]: Reached target basic.target - Basic System.
Mar 10 02:06:51.472151 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 10 02:06:51.472206 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 10 02:06:51.473282 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 10 02:06:51.477436 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 10 02:06:51.479777 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 10 02:06:51.489013 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 10 02:06:51.492734 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 10 02:06:51.495429 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 10 02:06:51.496499 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 10 02:06:51.501210 jq[1527]: false
Mar 10 02:06:51.501712 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 10 02:06:51.507182 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 10 02:06:51.508225 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing passwd entry cache
Mar 10 02:06:51.508435 oslogin_cache_refresh[1529]: Refreshing passwd entry cache
Mar 10 02:06:51.510612 extend-filesystems[1528]: Found /dev/vda6
Mar 10 02:06:51.513015 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 10 02:06:51.516510 extend-filesystems[1528]: Found /dev/vda9
Mar 10 02:06:51.521350 extend-filesystems[1528]: Checking size of /dev/vda9
Mar 10 02:06:51.517345 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 10 02:06:51.526246 oslogin_cache_refresh[1529]: Failure getting users, quitting
Mar 10 02:06:51.526359 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting users, quitting
Mar 10 02:06:51.526359 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 10 02:06:51.526359 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Refreshing group entry cache
Mar 10 02:06:51.526262 oslogin_cache_refresh[1529]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 10 02:06:51.526302 oslogin_cache_refresh[1529]: Refreshing group entry cache
Mar 10 02:06:51.527853 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 10 02:06:51.531387 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 10 02:06:51.531788 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 10 02:06:51.533625 extend-filesystems[1528]: Resized partition /dev/vda9
Mar 10 02:06:51.533968 systemd[1]: Starting update-engine.service - Update Engine...
Mar 10 02:06:51.540223 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Failure getting groups, quitting
Mar 10 02:06:51.540223 google_oslogin_nss_cache[1529]: oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 10 02:06:51.540123 oslogin_cache_refresh[1529]: Failure getting groups, quitting
Mar 10 02:06:51.540135 oslogin_cache_refresh[1529]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 10 02:06:51.541271 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 10 02:06:51.547226 extend-filesystems[1553]: resize2fs 1.47.3 (8-Jul-2025)
Mar 10 02:06:51.551576 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 10 02:06:51.556168 jq[1554]: true
Mar 10 02:06:51.558533 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 10 02:06:51.567401 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 10 02:06:51.564274 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 10 02:06:51.564537 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 10 02:06:51.564876 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 10 02:06:51.565286 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 10 02:06:51.569032 systemd[1]: motdgen.service: Deactivated successfully.
Mar 10 02:06:51.569423 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 10 02:06:51.574441 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 10 02:06:51.574741 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 10 02:06:51.578715 update_engine[1551]: I20260310 02:06:51.578599 1551 main.cc:92] Flatcar Update Engine starting
Mar 10 02:06:51.601100 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 10 02:06:51.604925 jq[1557]: true
Mar 10 02:06:51.607665 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 10 02:06:51.619653 extend-filesystems[1553]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 10 02:06:51.619653 extend-filesystems[1553]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 10 02:06:51.619653 extend-filesystems[1553]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 10 02:06:51.628503 extend-filesystems[1528]: Resized filesystem in /dev/vda9
Mar 10 02:06:51.640708 update_engine[1551]: I20260310 02:06:51.628021 1551 update_check_scheduler.cc:74] Next update check in 7m59s
Mar 10 02:06:51.625167 dbus-daemon[1525]: [system] SELinux support is enabled
Mar 10 02:06:51.622303 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 10 02:06:51.622556 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 10 02:06:51.634103 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 10 02:06:51.636028 systemd-logind[1546]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 10 02:06:51.636049 systemd-logind[1546]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 10 02:06:51.636993 systemd-logind[1546]: New seat seat0.
Mar 10 02:06:51.639614 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 10 02:06:51.649344 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 10 02:06:51.649657 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 10 02:06:51.651182 tar[1556]: linux-amd64/LICENSE
Mar 10 02:06:51.651182 tar[1556]: linux-amd64/helm
Mar 10 02:06:51.652010 dbus-daemon[1525]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 10 02:06:51.653031 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 10 02:06:51.653051 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 10 02:06:51.656663 systemd[1]: Started update-engine.service - Update Engine.
Mar 10 02:06:51.661877 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 10 02:06:51.700466 bash[1590]: Updated "/home/core/.ssh/authorized_keys"
Mar 10 02:06:51.701801 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 10 02:06:51.708755 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 10 02:06:51.722584 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 10 02:06:51.779404 containerd[1559]: time="2026-03-10T02:06:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 10 02:06:51.780315 containerd[1559]: time="2026-03-10T02:06:51.780113254Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 10 02:06:51.788521 containerd[1559]: time="2026-03-10T02:06:51.788478849Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="6.633µs"
Mar 10 02:06:51.788521 containerd[1559]: time="2026-03-10T02:06:51.788506751Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 10 02:06:51.788521 containerd[1559]: time="2026-03-10T02:06:51.788521629Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 10 02:06:51.788697 containerd[1559]: time="2026-03-10T02:06:51.788661490Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 10 02:06:51.788697 containerd[1559]: time="2026-03-10T02:06:51.788689342Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 10 02:06:51.788736 containerd[1559]: time="2026-03-10T02:06:51.788710702Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 10 02:06:51.788803 containerd[1559]: time="2026-03-10T02:06:51.788769071Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 10 02:06:51.788803 containerd[1559]: time="2026-03-10T02:06:51.788792565Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 10 02:06:51.789113 containerd[1559]: time="2026-03-10T02:06:51.789049374Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 10 02:06:51.789113 containerd[1559]: time="2026-03-10T02:06:51.789097133Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 10 02:06:51.789113 containerd[1559]: time="2026-03-10T02:06:51.789108224Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 10 02:06:51.789113 containerd[1559]: time="2026-03-10T02:06:51.789114946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 10 02:06:51.789225 containerd[1559]: time="2026-03-10T02:06:51.789198171Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 10 02:06:51.789485 containerd[1559]: time="2026-03-10T02:06:51.789423482Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 10 02:06:51.789512 containerd[1559]: time="2026-03-10T02:06:51.789487722Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 10 02:06:51.789512 containerd[1559]: time="2026-03-10T02:06:51.789502320Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 10 02:06:51.789611 containerd[1559]: time="2026-03-10T02:06:51.789565157Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 10 02:06:51.789904 containerd[1559]: time="2026-03-10T02:06:51.789871769Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 10 02:06:51.790009 containerd[1559]: time="2026-03-10T02:06:51.789973369Z" level=info msg="metadata content store policy set" policy=shared
Mar 10 02:06:51.795157 containerd[1559]: time="2026-03-10T02:06:51.795042039Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 10 02:06:51.795157 containerd[1559]: time="2026-03-10T02:06:51.795150382Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 10 02:06:51.795221 containerd[1559]: time="2026-03-10T02:06:51.795168465Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 10 02:06:51.795221 containerd[1559]: time="2026-03-10T02:06:51.795183854Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 10 02:06:51.795221 containerd[1559]: time="2026-03-10T02:06:51.795206917Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 10 02:06:51.795269 containerd[1559]: time="2026-03-10T02:06:51.795222777Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 10 02:06:51.795269 containerd[1559]: time="2026-03-10T02:06:51.795239608Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 10 02:06:51.795269 containerd[1559]: time="2026-03-10T02:06:51.795254536Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 10 02:06:51.795321 containerd[1559]: time="2026-03-10T02:06:51.795271578Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 10 02:06:51.795321 containerd[1559]: time="2026-03-10T02:06:51.795279893Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 10 02:06:51.795321 containerd[1559]: time="2026-03-10T02:06:51.795287177Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 10 02:06:51.795321 containerd[1559]: time="2026-03-10T02:06:51.795297596Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 10 02:06:51.795428 containerd[1559]: time="2026-03-10T02:06:51.795397112Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 10 02:06:51.795428 containerd[1559]: time="2026-03-10T02:06:51.795417871Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 10 02:06:51.795472 containerd[1559]: time="2026-03-10T02:06:51.795429583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 10 02:06:51.795472 containerd[1559]: time="2026-03-10T02:06:51.795439571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 10 02:06:51.795472 containerd[1559]: time="2026-03-10T02:06:51.795453708Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 10 02:06:51.795472 containerd[1559]: time="2026-03-10T02:06:51.795462444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 10 02:06:51.795472 containerd[1559]: time="2026-03-10T02:06:51.795471301Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 10 02:06:51.795557 containerd[1559]: time="2026-03-10T02:06:51.795479416Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 10 02:06:51.795557 containerd[1559]: time="2026-03-10T02:06:51.795488342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 10 02:06:51.795557 containerd[1559]: time="2026-03-10T02:06:51.795496768Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 10 02:06:51.795557 containerd[1559]: time="2026-03-10T02:06:51.795505014Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 10 02:06:51.795557 containerd[1559]: time="2026-03-10T02:06:51.795539468Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 10 02:06:51.795557 containerd[1559]: time="2026-03-10T02:06:51.795553363Z" level=info msg="Start snapshots syncer"
Mar 10 02:06:51.795748 containerd[1559]: time="2026-03-10T02:06:51.795602054Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 10 02:06:51.795816 containerd[1559]: time="2026-03-10T02:06:51.795775017Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 10 02:06:51.795968 containerd[1559]: time="2026-03-10T02:06:51.795845399Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 10 02:06:51.797125 containerd[1559]: time="2026-03-10T02:06:51.797003771Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 10 02:06:51.797200 containerd[1559]: time="2026-03-10T02:06:51.797153340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 10 02:06:51.797200 containerd[1559]: time="2026-03-10T02:06:51.797191020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 10 02:06:51.797250 containerd[1559]: time="2026-03-10T02:06:51.797210527Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 10 02:06:51.797250 containerd[1559]: time="2026-03-10T02:06:51.797226417Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 10 02:06:51.797250 containerd[1559]: time="2026-03-10T02:06:51.797241905Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 10 02:06:51.797297 containerd[1559]: time="2026-03-10T02:06:51.797257324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 10 02:06:51.797297 containerd[1559]: time="2026-03-10T02:06:51.797273234Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 10 02:06:51.797334 containerd[1559]: time="2026-03-10T02:06:51.797302048Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 10 02:06:51.797334 containerd[1559]: time="2026-03-10T02:06:51.797317036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 10 02:06:51.797334 containerd[1559]: time="2026-03-10T02:06:51.797330691Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 10 02:06:51.797382 containerd[1559]: time="2026-03-10T02:06:51.797367600Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 10 02:06:51.797400 containerd[1559]: time="2026-03-10T02:06:51.797382989Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 10 02:06:51.797400 containerd[1559]: time="2026-03-10T02:06:51.797394901Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 10 02:06:51.797433 containerd[1559]: time="2026-03-10T02:06:51.797406493Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 10 02:06:51.797433 containerd[1559]: time="2026-03-10T02:06:51.797416742Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 10 02:06:51.797471 containerd[1559]: time="2026-03-10T02:06:51.797429055Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Mar 10 02:06:51.797471 containerd[1559]: time="2026-03-10T02:06:51.797452649Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Mar 10 02:06:51.797504 containerd[1559]: time="2026-03-10T02:06:51.797474921Z" level=info msg="runtime interface created"
Mar 10 02:06:51.797504 containerd[1559]: time="2026-03-10T02:06:51.797482845Z" level=info msg="created NRI interface"
Mar 10 02:06:51.797504 containerd[1559]: time="2026-03-10T02:06:51.797495288Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Mar 10 02:06:51.797555 containerd[1559]: time="2026-03-10T02:06:51.797504866Z" level=info msg="Connect containerd service"
Mar 10 02:06:51.797555 containerd[1559]: time="2026-03-10T02:06:51.797521187Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 10 02:06:51.798277
containerd[1559]: time="2026-03-10T02:06:51.798227956Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 02:06:51.863162 containerd[1559]: time="2026-03-10T02:06:51.863122098Z" level=info msg="Start subscribing containerd event" Mar 10 02:06:51.863459 containerd[1559]: time="2026-03-10T02:06:51.863323684Z" level=info msg="Start recovering state" Mar 10 02:06:51.863612 containerd[1559]: time="2026-03-10T02:06:51.863361221Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 10 02:06:51.863714 containerd[1559]: time="2026-03-10T02:06:51.863700157Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 10 02:06:51.865179 containerd[1559]: time="2026-03-10T02:06:51.865145996Z" level=info msg="Start event monitor" Mar 10 02:06:51.865236 containerd[1559]: time="2026-03-10T02:06:51.865225434Z" level=info msg="Start cni network conf syncer for default" Mar 10 02:06:51.865282 containerd[1559]: time="2026-03-10T02:06:51.865266921Z" level=info msg="Start streaming server" Mar 10 02:06:51.865328 containerd[1559]: time="2026-03-10T02:06:51.865318067Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 10 02:06:51.865381 containerd[1559]: time="2026-03-10T02:06:51.865370124Z" level=info msg="runtime interface starting up..." Mar 10 02:06:51.865519 containerd[1559]: time="2026-03-10T02:06:51.865445261Z" level=info msg="starting plugins..." Mar 10 02:06:51.865589 containerd[1559]: time="2026-03-10T02:06:51.865575514Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 10 02:06:51.865877 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 10 02:06:51.867023 containerd[1559]: time="2026-03-10T02:06:51.867007397Z" level=info msg="containerd successfully booted in 0.088259s" Mar 10 02:06:51.937451 tar[1556]: linux-amd64/README.md Mar 10 02:06:51.962274 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 10 02:06:52.009559 sshd_keygen[1552]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 10 02:06:52.031432 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 10 02:06:52.035693 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 10 02:06:52.060215 systemd[1]: issuegen.service: Deactivated successfully. Mar 10 02:06:52.060476 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 10 02:06:52.064496 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 10 02:06:52.090690 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 10 02:06:52.094990 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 10 02:06:52.098452 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 10 02:06:52.101317 systemd[1]: Reached target getty.target - Login Prompts. Mar 10 02:06:52.547426 systemd-networkd[1459]: eth0: Gained IPv6LL Mar 10 02:06:52.550381 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 10 02:06:52.553912 systemd[1]: Reached target network-online.target - Network is Online. Mar 10 02:06:52.557992 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 10 02:06:52.561903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:06:52.565478 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 10 02:06:52.591287 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 10 02:06:52.591587 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Mar 10 02:06:52.595269 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 10 02:06:52.599030 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 10 02:06:53.288412 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:06:53.292879 (kubelet)[1660]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 02:06:53.293234 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 10 02:06:53.296672 systemd[1]: Startup finished in 3.192s (kernel) + 6.719s (initrd) + 4.529s (userspace) = 14.441s. Mar 10 02:06:53.675361 kubelet[1660]: E0310 02:06:53.675275 1660 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 02:06:53.678601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 02:06:53.678822 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 02:06:53.679277 systemd[1]: kubelet.service: Consumed 849ms CPU time, 258.2M memory peak. Mar 10 02:06:55.681123 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 10 02:06:55.682294 systemd[1]: Started sshd@0-10.0.0.149:22-10.0.0.1:56690.service - OpenSSH per-connection server daemon (10.0.0.1:56690). Mar 10 02:06:55.756662 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 56690 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 02:06:55.758867 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:06:55.766666 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 10 02:06:55.767989 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 10 02:06:55.775249 systemd-logind[1546]: New session 1 of user core. Mar 10 02:06:55.796383 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 10 02:06:55.800546 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 10 02:06:55.822825 (systemd)[1679]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 10 02:06:55.826819 systemd-logind[1546]: New session c1 of user core. Mar 10 02:06:55.956639 systemd[1679]: Queued start job for default target default.target. Mar 10 02:06:55.979622 systemd[1679]: Created slice app.slice - User Application Slice. Mar 10 02:06:55.979664 systemd[1679]: Reached target paths.target - Paths. Mar 10 02:06:55.979722 systemd[1679]: Reached target timers.target - Timers. Mar 10 02:06:55.981292 systemd[1679]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 10 02:06:55.993139 systemd[1679]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 10 02:06:55.993308 systemd[1679]: Reached target sockets.target - Sockets. Mar 10 02:06:55.993393 systemd[1679]: Reached target basic.target - Basic System. Mar 10 02:06:55.993442 systemd[1679]: Reached target default.target - Main User Target. Mar 10 02:06:55.993497 systemd[1679]: Startup finished in 158ms. Mar 10 02:06:55.993517 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 10 02:06:55.995093 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 10 02:06:56.006005 systemd[1]: Started sshd@1-10.0.0.149:22-10.0.0.1:56694.service - OpenSSH per-connection server daemon (10.0.0.1:56694). 
Mar 10 02:06:56.061088 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 56694 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 02:06:56.062463 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:06:56.067864 systemd-logind[1546]: New session 2 of user core. Mar 10 02:06:56.077232 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 10 02:06:56.090827 sshd[1693]: Connection closed by 10.0.0.1 port 56694 Mar 10 02:06:56.091361 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Mar 10 02:06:56.101586 systemd[1]: sshd@1-10.0.0.149:22-10.0.0.1:56694.service: Deactivated successfully. Mar 10 02:06:56.103379 systemd[1]: session-2.scope: Deactivated successfully. Mar 10 02:06:56.105591 systemd-logind[1546]: Session 2 logged out. Waiting for processes to exit. Mar 10 02:06:56.106598 systemd[1]: Started sshd@2-10.0.0.149:22-10.0.0.1:56700.service - OpenSSH per-connection server daemon (10.0.0.1:56700). Mar 10 02:06:56.108236 systemd-logind[1546]: Removed session 2. Mar 10 02:06:56.167432 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 56700 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 02:06:56.168824 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:06:56.174200 systemd-logind[1546]: New session 3 of user core. Mar 10 02:06:56.181213 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 10 02:06:56.189191 sshd[1702]: Connection closed by 10.0.0.1 port 56700 Mar 10 02:06:56.189531 sshd-session[1699]: pam_unix(sshd:session): session closed for user core Mar 10 02:06:56.206952 systemd[1]: sshd@2-10.0.0.149:22-10.0.0.1:56700.service: Deactivated successfully. Mar 10 02:06:56.209625 systemd[1]: session-3.scope: Deactivated successfully. Mar 10 02:06:56.210899 systemd-logind[1546]: Session 3 logged out. Waiting for processes to exit. 
Mar 10 02:06:56.215265 systemd[1]: Started sshd@3-10.0.0.149:22-10.0.0.1:56712.service - OpenSSH per-connection server daemon (10.0.0.1:56712). Mar 10 02:06:56.216018 systemd-logind[1546]: Removed session 3. Mar 10 02:06:56.275466 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 56712 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 02:06:56.276906 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:06:56.283363 systemd-logind[1546]: New session 4 of user core. Mar 10 02:06:56.293263 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 10 02:06:56.311177 sshd[1711]: Connection closed by 10.0.0.1 port 56712 Mar 10 02:06:56.311698 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Mar 10 02:06:56.321531 systemd[1]: sshd@3-10.0.0.149:22-10.0.0.1:56712.service: Deactivated successfully. Mar 10 02:06:56.323469 systemd[1]: session-4.scope: Deactivated successfully. Mar 10 02:06:56.324478 systemd-logind[1546]: Session 4 logged out. Waiting for processes to exit. Mar 10 02:06:56.328175 systemd[1]: Started sshd@4-10.0.0.149:22-10.0.0.1:56724.service - OpenSSH per-connection server daemon (10.0.0.1:56724). Mar 10 02:06:56.329117 systemd-logind[1546]: Removed session 4. Mar 10 02:06:56.396286 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 56724 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 02:06:56.398014 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:06:56.403342 systemd-logind[1546]: New session 5 of user core. Mar 10 02:06:56.420210 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 10 02:06:56.440755 sudo[1721]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 10 02:06:56.441278 sudo[1721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 02:06:56.465595 sudo[1721]: pam_unix(sudo:session): session closed for user root Mar 10 02:06:56.467524 sshd[1720]: Connection closed by 10.0.0.1 port 56724 Mar 10 02:06:56.467912 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Mar 10 02:06:56.476467 systemd[1]: sshd@4-10.0.0.149:22-10.0.0.1:56724.service: Deactivated successfully. Mar 10 02:06:56.478262 systemd[1]: session-5.scope: Deactivated successfully. Mar 10 02:06:56.479343 systemd-logind[1546]: Session 5 logged out. Waiting for processes to exit. Mar 10 02:06:56.481851 systemd[1]: Started sshd@5-10.0.0.149:22-10.0.0.1:56736.service - OpenSSH per-connection server daemon (10.0.0.1:56736). Mar 10 02:06:56.483137 systemd-logind[1546]: Removed session 5. Mar 10 02:06:56.553851 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 56736 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 02:06:56.555629 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:06:56.560777 systemd-logind[1546]: New session 6 of user core. Mar 10 02:06:56.573223 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 10 02:06:56.587615 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 10 02:06:56.588001 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 02:06:56.595358 sudo[1732]: pam_unix(sudo:session): session closed for user root Mar 10 02:06:56.601655 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 10 02:06:56.602004 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 02:06:56.612762 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 10 02:06:56.662456 augenrules[1754]: No rules Mar 10 02:06:56.663904 systemd[1]: audit-rules.service: Deactivated successfully. Mar 10 02:06:56.664262 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 10 02:06:56.665244 sudo[1731]: pam_unix(sudo:session): session closed for user root Mar 10 02:06:56.666799 sshd[1730]: Connection closed by 10.0.0.1 port 56736 Mar 10 02:06:56.667325 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Mar 10 02:06:56.675282 systemd[1]: sshd@5-10.0.0.149:22-10.0.0.1:56736.service: Deactivated successfully. Mar 10 02:06:56.677041 systemd[1]: session-6.scope: Deactivated successfully. Mar 10 02:06:56.678160 systemd-logind[1546]: Session 6 logged out. Waiting for processes to exit. Mar 10 02:06:56.680517 systemd[1]: Started sshd@6-10.0.0.149:22-10.0.0.1:56740.service - OpenSSH per-connection server daemon (10.0.0.1:56740). Mar 10 02:06:56.681717 systemd-logind[1546]: Removed session 6. Mar 10 02:06:56.743002 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 56740 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 02:06:56.745114 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:06:56.750719 systemd-logind[1546]: New session 7 of user core. 
Mar 10 02:06:56.760245 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 10 02:06:56.773656 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 10 02:06:56.774026 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 02:06:57.071954 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 10 02:06:57.093556 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 10 02:06:57.326435 dockerd[1787]: time="2026-03-10T02:06:57.326282329Z" level=info msg="Starting up" Mar 10 02:06:57.327820 dockerd[1787]: time="2026-03-10T02:06:57.327792869Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 10 02:06:57.340592 dockerd[1787]: time="2026-03-10T02:06:57.340522309Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 10 02:06:57.538755 dockerd[1787]: time="2026-03-10T02:06:57.538675731Z" level=info msg="Loading containers: start." Mar 10 02:06:57.550133 kernel: Initializing XFRM netlink socket Mar 10 02:06:57.899850 systemd-networkd[1459]: docker0: Link UP Mar 10 02:06:57.905679 dockerd[1787]: time="2026-03-10T02:06:57.905608539Z" level=info msg="Loading containers: done." Mar 10 02:06:57.921006 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck290410634-merged.mount: Deactivated successfully. 
Mar 10 02:06:57.924543 dockerd[1787]: time="2026-03-10T02:06:57.924478604Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 10 02:06:57.924616 dockerd[1787]: time="2026-03-10T02:06:57.924583931Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 10 02:06:57.924723 dockerd[1787]: time="2026-03-10T02:06:57.924678007Z" level=info msg="Initializing buildkit" Mar 10 02:06:57.960866 dockerd[1787]: time="2026-03-10T02:06:57.960733552Z" level=info msg="Completed buildkit initialization" Mar 10 02:06:57.964363 dockerd[1787]: time="2026-03-10T02:06:57.964303075Z" level=info msg="Daemon has completed initialization" Mar 10 02:06:57.964535 dockerd[1787]: time="2026-03-10T02:06:57.964427297Z" level=info msg="API listen on /run/docker.sock" Mar 10 02:06:57.964575 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 10 02:06:58.396246 containerd[1559]: time="2026-03-10T02:06:58.396207726Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 10 02:06:58.854842 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034708474.mount: Deactivated successfully. 
Mar 10 02:06:59.808526 containerd[1559]: time="2026-03-10T02:06:59.808423071Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:06:59.809168 containerd[1559]: time="2026-03-10T02:06:59.809097663Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 10 02:06:59.810420 containerd[1559]: time="2026-03-10T02:06:59.810358464Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:06:59.814944 containerd[1559]: time="2026-03-10T02:06:59.814879833Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 1.418631511s" Mar 10 02:06:59.814944 containerd[1559]: time="2026-03-10T02:06:59.814921280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 10 02:06:59.815747 containerd[1559]: time="2026-03-10T02:06:59.815621366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 10 02:06:59.815948 containerd[1559]: time="2026-03-10T02:06:59.815666261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:00.801077 containerd[1559]: time="2026-03-10T02:07:00.800972803Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:00.802084 containerd[1559]: time="2026-03-10T02:07:00.802010096Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 10 02:07:00.803404 containerd[1559]: time="2026-03-10T02:07:00.803337694Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:00.806693 containerd[1559]: time="2026-03-10T02:07:00.806620261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:00.807516 containerd[1559]: time="2026-03-10T02:07:00.807445000Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 991.617148ms" Mar 10 02:07:00.807516 containerd[1559]: time="2026-03-10T02:07:00.807485145Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 10 02:07:00.808011 containerd[1559]: time="2026-03-10T02:07:00.807933773Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 10 02:07:01.599689 containerd[1559]: time="2026-03-10T02:07:01.599598033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:01.600487 containerd[1559]: time="2026-03-10T02:07:01.600459261Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 10 02:07:01.601839 containerd[1559]: time="2026-03-10T02:07:01.601799570Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:01.604466 containerd[1559]: time="2026-03-10T02:07:01.604400580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:01.605050 containerd[1559]: time="2026-03-10T02:07:01.604979286Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 797.006751ms" Mar 10 02:07:01.605050 containerd[1559]: time="2026-03-10T02:07:01.605043145Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 10 02:07:01.605626 containerd[1559]: time="2026-03-10T02:07:01.605504686Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 10 02:07:02.548513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3376394544.mount: Deactivated successfully. 
Mar 10 02:07:02.758619 containerd[1559]: time="2026-03-10T02:07:02.758550747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:02.759414 containerd[1559]: time="2026-03-10T02:07:02.759344470Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 10 02:07:02.760335 containerd[1559]: time="2026-03-10T02:07:02.760282341Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:02.762339 containerd[1559]: time="2026-03-10T02:07:02.762287752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:02.762806 containerd[1559]: time="2026-03-10T02:07:02.762751309Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.157225172s" Mar 10 02:07:02.762806 containerd[1559]: time="2026-03-10T02:07:02.762790662Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 10 02:07:02.763407 containerd[1559]: time="2026-03-10T02:07:02.763303660Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 10 02:07:03.181502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808064505.mount: Deactivated successfully. 
Mar 10 02:07:03.886295 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 10 02:07:03.888051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:07:04.070258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:07:04.075468 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 02:07:04.121028 kubelet[2139]: E0310 02:07:04.120918 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 02:07:04.123208 containerd[1559]: time="2026-03-10T02:07:04.123116190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:04.124268 containerd[1559]: time="2026-03-10T02:07:04.124232210Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 10 02:07:04.126359 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 02:07:04.126671 containerd[1559]: time="2026-03-10T02:07:04.125488939Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:04.126591 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 02:07:04.126971 systemd[1]: kubelet.service: Consumed 212ms CPU time, 108.9M memory peak. 
Mar 10 02:07:04.128478 containerd[1559]: time="2026-03-10T02:07:04.128440122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:04.129490 containerd[1559]: time="2026-03-10T02:07:04.129447775Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.366101625s" Mar 10 02:07:04.129490 containerd[1559]: time="2026-03-10T02:07:04.129488070Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 10 02:07:04.130180 containerd[1559]: time="2026-03-10T02:07:04.130144485Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 10 02:07:04.505901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4006622506.mount: Deactivated successfully. 
Mar 10 02:07:04.513915 containerd[1559]: time="2026-03-10T02:07:04.513828035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:07:04.514586 containerd[1559]: time="2026-03-10T02:07:04.514561491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 10 02:07:04.515946 containerd[1559]: time="2026-03-10T02:07:04.515875179Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:07:04.519430 containerd[1559]: time="2026-03-10T02:07:04.519381162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:07:04.520240 containerd[1559]: time="2026-03-10T02:07:04.520193579Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 390.011213ms"
Mar 10 02:07:04.520240 containerd[1559]: time="2026-03-10T02:07:04.520229035Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 10 02:07:04.520935 containerd[1559]: time="2026-03-10T02:07:04.520812874Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 10 02:07:04.968344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4250133082.mount: Deactivated successfully.
Mar 10 02:07:05.779301 containerd[1559]: time="2026-03-10T02:07:05.779228664Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:07:05.780163 containerd[1559]: time="2026-03-10T02:07:05.780096706Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 10 02:07:05.781513 containerd[1559]: time="2026-03-10T02:07:05.781459911Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:07:05.783855 containerd[1559]: time="2026-03-10T02:07:05.783819855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:07:05.784850 containerd[1559]: time="2026-03-10T02:07:05.784798520Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 1.26395017s"
Mar 10 02:07:05.784850 containerd[1559]: time="2026-03-10T02:07:05.784838144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 10 02:07:09.005721 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:07:09.005887 systemd[1]: kubelet.service: Consumed 212ms CPU time, 108.9M memory peak.
Mar 10 02:07:09.008210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 02:07:09.038400 systemd[1]: Reload requested from client PID 2243 ('systemctl') (unit session-7.scope)...
Mar 10 02:07:09.038411 systemd[1]: Reloading...
Mar 10 02:07:09.126103 zram_generator::config[2286]: No configuration found.
Mar 10 02:07:09.320783 systemd[1]: Reloading finished in 281 ms.
Mar 10 02:07:09.387023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:07:09.391125 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 02:07:09.394802 systemd[1]: kubelet.service: Deactivated successfully.
Mar 10 02:07:09.395242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:07:09.395309 systemd[1]: kubelet.service: Consumed 142ms CPU time, 98.3M memory peak.
Mar 10 02:07:09.397206 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 02:07:09.572632 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 02:07:09.576868 (kubelet)[2335]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 02:07:09.619179 kubelet[2335]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 10 02:07:09.619179 kubelet[2335]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 02:07:09.619589 kubelet[2335]: I0310 02:07:09.619312 2335 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 10 02:07:09.784317 kubelet[2335]: I0310 02:07:09.784249 2335 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 10 02:07:09.784317 kubelet[2335]: I0310 02:07:09.784305 2335 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 02:07:09.784392 kubelet[2335]: I0310 02:07:09.784338 2335 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 10 02:07:09.784392 kubelet[2335]: I0310 02:07:09.784352 2335 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 02:07:09.784752 kubelet[2335]: I0310 02:07:09.784693 2335 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 10 02:07:09.835348 kubelet[2335]: E0310 02:07:09.835217 2335 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.149:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 02:07:09.836200 kubelet[2335]: I0310 02:07:09.836171 2335 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 02:07:09.842297 kubelet[2335]: I0310 02:07:09.842266 2335 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 10 02:07:09.848297 kubelet[2335]: I0310 02:07:09.848231 2335 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 10 02:07:09.849649 kubelet[2335]: I0310 02:07:09.849579 2335 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 02:07:09.849800 kubelet[2335]: I0310 02:07:09.849624 2335 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 02:07:09.849800 kubelet[2335]: I0310 02:07:09.849774 2335 topology_manager.go:138] "Creating topology manager with none policy"
Mar 10 02:07:09.849800 kubelet[2335]: I0310 02:07:09.849784 2335 container_manager_linux.go:306] "Creating device plugin manager"
Mar 10 02:07:09.849975 kubelet[2335]: I0310 02:07:09.849864 2335 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 10 02:07:09.853035 kubelet[2335]: I0310 02:07:09.852984 2335 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 02:07:09.853234 kubelet[2335]: I0310 02:07:09.853200 2335 kubelet.go:475] "Attempting to sync node with API server"
Mar 10 02:07:09.853234 kubelet[2335]: I0310 02:07:09.853222 2335 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 02:07:09.853290 kubelet[2335]: I0310 02:07:09.853240 2335 kubelet.go:387] "Adding apiserver pod source"
Mar 10 02:07:09.853290 kubelet[2335]: I0310 02:07:09.853251 2335 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 02:07:09.853880 kubelet[2335]: E0310 02:07:09.853821 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.149:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 10 02:07:09.853990 kubelet[2335]: E0310 02:07:09.853840 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.149:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 10 02:07:09.854754 kubelet[2335]: I0310 02:07:09.854723 2335 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 10 02:07:09.855352 kubelet[2335]: I0310 02:07:09.855309 2335 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 02:07:09.855352 kubelet[2335]: I0310 02:07:09.855349 2335 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 10 02:07:09.855417 kubelet[2335]: W0310 02:07:09.855393 2335 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 10 02:07:09.858884 kubelet[2335]: I0310 02:07:09.858860 2335 server.go:1262] "Started kubelet"
Mar 10 02:07:09.860124 kubelet[2335]: I0310 02:07:09.860019 2335 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 10 02:07:09.860181 kubelet[2335]: I0310 02:07:09.860155 2335 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 02:07:09.860918 kubelet[2335]: I0310 02:07:09.860270 2335 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 10 02:07:09.860918 kubelet[2335]: I0310 02:07:09.860767 2335 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 02:07:09.860918 kubelet[2335]: I0310 02:07:09.860817 2335 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 02:07:09.862810 kubelet[2335]: I0310 02:07:09.862102 2335 server.go:310] "Adding debug handlers to kubelet server"
Mar 10 02:07:09.863923 kubelet[2335]: I0310 02:07:09.863841 2335 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 02:07:09.864117 kubelet[2335]: E0310 02:07:09.864028 2335 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 02:07:09.864186 kubelet[2335]: E0310 02:07:09.864148 2335 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 02:07:09.864186 kubelet[2335]: I0310 02:07:09.864181 2335 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 10 02:07:09.864269 kubelet[2335]: I0310 02:07:09.864256 2335 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 10 02:07:09.864444 kubelet[2335]: I0310 02:07:09.864338 2335 reconciler.go:29] "Reconciler: start to sync state"
Mar 10 02:07:09.865120 kubelet[2335]: E0310 02:07:09.864698 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="200ms"
Mar 10 02:07:09.865120 kubelet[2335]: E0310 02:07:09.862850 2335 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.149:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.149:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b58b8fdfc66bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 02:07:09.858825916 +0000 UTC m=+0.277823905,LastTimestamp:2026-03-10 02:07:09.858825916 +0000 UTC m=+0.277823905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 10 02:07:09.865120 kubelet[2335]: E0310 02:07:09.864971 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.149:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 10 02:07:09.865767 kubelet[2335]: I0310 02:07:09.865753 2335 factory.go:223] Registration of the systemd container factory successfully
Mar 10 02:07:09.865849 kubelet[2335]: I0310 02:07:09.865820 2335 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 02:07:09.866938 kubelet[2335]: I0310 02:07:09.866916 2335 factory.go:223] Registration of the containerd container factory successfully
Mar 10 02:07:09.876665 kubelet[2335]: I0310 02:07:09.876620 2335 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 10 02:07:09.876665 kubelet[2335]: I0310 02:07:09.876649 2335 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 10 02:07:09.876665 kubelet[2335]: I0310 02:07:09.876661 2335 state_mem.go:36] "Initialized new in-memory state store"
Mar 10 02:07:09.878951 kubelet[2335]: I0310 02:07:09.878927 2335 policy_none.go:49] "None policy: Start"
Mar 10 02:07:09.878951 kubelet[2335]: I0310 02:07:09.878953 2335 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 10 02:07:09.879018 kubelet[2335]: I0310 02:07:09.878964 2335 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 10 02:07:09.880358 kubelet[2335]: I0310 02:07:09.880320 2335 policy_none.go:47] "Start"
Mar 10 02:07:09.884728 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 10 02:07:09.886937 kubelet[2335]: I0310 02:07:09.886892 2335 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 10 02:07:09.888566 kubelet[2335]: I0310 02:07:09.888526 2335 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 10 02:07:09.888566 kubelet[2335]: I0310 02:07:09.888549 2335 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 10 02:07:09.888566 kubelet[2335]: I0310 02:07:09.888570 2335 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 10 02:07:09.888648 kubelet[2335]: E0310 02:07:09.888609 2335 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 02:07:09.891153 kubelet[2335]: E0310 02:07:09.891120 2335 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.149:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.149:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 10 02:07:09.895594 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 10 02:07:09.898764 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 10 02:07:09.910163 kubelet[2335]: E0310 02:07:09.910117 2335 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 02:07:09.910484 kubelet[2335]: I0310 02:07:09.910330 2335 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 10 02:07:09.910484 kubelet[2335]: I0310 02:07:09.910362 2335 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 02:07:09.910714 kubelet[2335]: I0310 02:07:09.910680 2335 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 10 02:07:09.911906 kubelet[2335]: E0310 02:07:09.911845 2335 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 02:07:09.911906 kubelet[2335]: E0310 02:07:09.911897 2335 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 10 02:07:10.000602 systemd[1]: Created slice kubepods-burstable-poddeac8424ef8d294886b1d3b592ae1e33.slice - libcontainer container kubepods-burstable-poddeac8424ef8d294886b1d3b592ae1e33.slice.
Mar 10 02:07:10.012336 kubelet[2335]: I0310 02:07:10.012248 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 02:07:10.012720 kubelet[2335]: E0310 02:07:10.012672 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost"
Mar 10 02:07:10.014124 kubelet[2335]: E0310 02:07:10.014021 2335 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 02:07:10.018237 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 10 02:07:10.020408 kubelet[2335]: E0310 02:07:10.020306 2335 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 02:07:10.023221 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 10 02:07:10.025236 kubelet[2335]: E0310 02:07:10.025208 2335 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 02:07:10.066333 kubelet[2335]: E0310 02:07:10.066274 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="400ms"
Mar 10 02:07:10.166578 kubelet[2335]: I0310 02:07:10.166503 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/deac8424ef8d294886b1d3b592ae1e33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"deac8424ef8d294886b1d3b592ae1e33\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 02:07:10.166578 kubelet[2335]: I0310 02:07:10.166553 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:07:10.166743 kubelet[2335]: I0310 02:07:10.166616 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:07:10.166743 kubelet[2335]: I0310 02:07:10.166636 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:07:10.166743 kubelet[2335]: I0310 02:07:10.166650 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 10 02:07:10.166743 kubelet[2335]: I0310 02:07:10.166662 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/deac8424ef8d294886b1d3b592ae1e33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"deac8424ef8d294886b1d3b592ae1e33\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 02:07:10.166743 kubelet[2335]: I0310 02:07:10.166676 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/deac8424ef8d294886b1d3b592ae1e33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"deac8424ef8d294886b1d3b592ae1e33\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 02:07:10.166837 kubelet[2335]: I0310 02:07:10.166691 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:07:10.166837 kubelet[2335]: I0310 02:07:10.166705 2335 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 02:07:10.215483 kubelet[2335]: I0310 02:07:10.215400 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 02:07:10.215783 kubelet[2335]: E0310 02:07:10.215741 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost"
Mar 10 02:07:10.319204 containerd[1559]: time="2026-03-10T02:07:10.319007356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:deac8424ef8d294886b1d3b592ae1e33,Namespace:kube-system,Attempt:0,}"
Mar 10 02:07:10.324183 containerd[1559]: time="2026-03-10T02:07:10.324114502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 10 02:07:10.328364 containerd[1559]: time="2026-03-10T02:07:10.328326063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 10 02:07:10.467154 kubelet[2335]: E0310 02:07:10.466966 2335 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.149:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.149:6443: connect: connection refused" interval="800ms"
Mar 10 02:07:10.617990 kubelet[2335]: I0310 02:07:10.617908 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 10 02:07:10.618524 kubelet[2335]: E0310 02:07:10.618437 2335 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.149:6443/api/v1/nodes\": dial tcp 10.0.0.149:6443: connect: connection refused" node="localhost"
Mar 10 02:07:10.708757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2318376215.mount: Deactivated successfully.
Mar 10 02:07:10.716396 containerd[1559]: time="2026-03-10T02:07:10.716311639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 02:07:10.717397 containerd[1559]: time="2026-03-10T02:07:10.717129436Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 10 02:07:10.720605 containerd[1559]: time="2026-03-10T02:07:10.720453500Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 02:07:10.723942 containerd[1559]: time="2026-03-10T02:07:10.723841716Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 02:07:10.725656 containerd[1559]: time="2026-03-10T02:07:10.725572716Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 02:07:10.726243 containerd[1559]: time="2026-03-10T02:07:10.726168249Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 10 02:07:10.727172 containerd[1559]: time="2026-03-10T02:07:10.727113579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 10 02:07:10.728259 containerd[1559]: time="2026-03-10T02:07:10.728195922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 02:07:10.731214 containerd[1559]: time="2026-03-10T02:07:10.731150544Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 409.545957ms"
Mar 10 02:07:10.731873 containerd[1559]: time="2026-03-10T02:07:10.731836237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 402.339338ms"
Mar 10 02:07:10.740115 containerd[1559]: time="2026-03-10T02:07:10.739991766Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 414.618686ms"
Mar 10 02:07:10.759674 containerd[1559]: time="2026-03-10T02:07:10.759589857Z" level=info msg="connecting to shim 68f83e38208107e28baecb13278eb3f0479473e99c91a2329e66a64e1e5e615f" address="unix:///run/containerd/s/c0eaad4819e9ce7186ae3925dae379f3f1eee9a1f5b8676bdebe479a5ab271d2" namespace=k8s.io protocol=ttrpc version=3
Mar 10 02:07:10.767391 containerd[1559]: time="2026-03-10T02:07:10.767314254Z" level=info msg="connecting to shim e49fe405f65757a54e504eaa56e6fb99476a4c1beb7cce6169e235077b93b542" address="unix:///run/containerd/s/525bf06d360c162ceaf7298aaf5c4f9f853ac0b8e1932b30c21c85c92b9d5c8d" namespace=k8s.io protocol=ttrpc version=3
Mar 10 02:07:10.775672 containerd[1559]: time="2026-03-10T02:07:10.775628142Z" level=info msg="connecting to shim 4dbe84eef84ad2dd6c1eac927e399c649c325a991146a078162a78f7c4c52e80" address="unix:///run/containerd/s/3008aa393928264f985926d2c9db444f712fbb1470499c46dc69a25c31be379c" namespace=k8s.io protocol=ttrpc version=3
Mar 10 02:07:10.793533 systemd[1]: Started cri-containerd-68f83e38208107e28baecb13278eb3f0479473e99c91a2329e66a64e1e5e615f.scope - libcontainer container 68f83e38208107e28baecb13278eb3f0479473e99c91a2329e66a64e1e5e615f.
Mar 10 02:07:10.811291 systemd[1]: Started cri-containerd-4dbe84eef84ad2dd6c1eac927e399c649c325a991146a078162a78f7c4c52e80.scope - libcontainer container 4dbe84eef84ad2dd6c1eac927e399c649c325a991146a078162a78f7c4c52e80.
Mar 10 02:07:10.814005 systemd[1]: Started cri-containerd-e49fe405f65757a54e504eaa56e6fb99476a4c1beb7cce6169e235077b93b542.scope - libcontainer container e49fe405f65757a54e504eaa56e6fb99476a4c1beb7cce6169e235077b93b542.
Mar 10 02:07:10.865763 containerd[1559]: time="2026-03-10T02:07:10.865665944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:deac8424ef8d294886b1d3b592ae1e33,Namespace:kube-system,Attempt:0,} returns sandbox id \"68f83e38208107e28baecb13278eb3f0479473e99c91a2329e66a64e1e5e615f\""
Mar 10 02:07:10.868629 containerd[1559]: time="2026-03-10T02:07:10.868585874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dbe84eef84ad2dd6c1eac927e399c649c325a991146a078162a78f7c4c52e80\""
Mar 10 02:07:10.872508 containerd[1559]: time="2026-03-10T02:07:10.872397368Z" level=info msg="CreateContainer within sandbox \"68f83e38208107e28baecb13278eb3f0479473e99c91a2329e66a64e1e5e615f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 10 02:07:10.873902 containerd[1559]: time="2026-03-10T02:07:10.873812640Z" level=info msg="CreateContainer within sandbox \"4dbe84eef84ad2dd6c1eac927e399c649c325a991146a078162a78f7c4c52e80\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 10 02:07:10.882493 containerd[1559]: time="2026-03-10T02:07:10.882442678Z" level=info msg="Container a9aa6fa1eb719e2b9d4787377e04ad5d8840c8125d8413bcc4b919a52fccb97d: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:07:10.886620 containerd[1559]: time="2026-03-10T02:07:10.885856521Z" level=info msg="Container a731665bbc06ff3339ff450e1449713be17cf8dc12452267835c6f8874784272: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:07:10.894412 containerd[1559]: time="2026-03-10T02:07:10.894357638Z" level=info msg="CreateContainer within sandbox \"68f83e38208107e28baecb13278eb3f0479473e99c91a2329e66a64e1e5e615f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a9aa6fa1eb719e2b9d4787377e04ad5d8840c8125d8413bcc4b919a52fccb97d\""
Mar 10 02:07:10.895239 containerd[1559]: time="2026-03-10T02:07:10.895215149Z" level=info msg="StartContainer for \"a9aa6fa1eb719e2b9d4787377e04ad5d8840c8125d8413bcc4b919a52fccb97d\""
Mar 10 02:07:10.895512 containerd[1559]: time="2026-03-10T02:07:10.895446129Z" level=info msg="CreateContainer within sandbox \"4dbe84eef84ad2dd6c1eac927e399c649c325a991146a078162a78f7c4c52e80\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a731665bbc06ff3339ff450e1449713be17cf8dc12452267835c6f8874784272\""
Mar 10 02:07:10.895512 containerd[1559]: time="2026-03-10T02:07:10.895505630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"e49fe405f65757a54e504eaa56e6fb99476a4c1beb7cce6169e235077b93b542\""
Mar 10 02:07:10.896016 containerd[1559]: time="2026-03-10T02:07:10.895963765Z" level=info msg="StartContainer for \"a731665bbc06ff3339ff450e1449713be17cf8dc12452267835c6f8874784272\""
Mar 10 02:07:10.897352 containerd[1559]: time="2026-03-10T02:07:10.896887650Z" level=info msg="connecting to shim a731665bbc06ff3339ff450e1449713be17cf8dc12452267835c6f8874784272" address="unix:///run/containerd/s/3008aa393928264f985926d2c9db444f712fbb1470499c46dc69a25c31be379c" protocol=ttrpc version=3
Mar 10 02:07:10.898959 containerd[1559]: time="2026-03-10T02:07:10.898935544Z" level=info msg="connecting to shim a9aa6fa1eb719e2b9d4787377e04ad5d8840c8125d8413bcc4b919a52fccb97d" address="unix:///run/containerd/s/c0eaad4819e9ce7186ae3925dae379f3f1eee9a1f5b8676bdebe479a5ab271d2" protocol=ttrpc version=3
Mar 10 02:07:10.900381 containerd[1559]: time="2026-03-10T02:07:10.900361015Z" level=info msg="CreateContainer within sandbox \"e49fe405f65757a54e504eaa56e6fb99476a4c1beb7cce6169e235077b93b542\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 10 02:07:10.911902 containerd[1559]: time="2026-03-10T02:07:10.911879314Z" level=info msg="Container a15826c11121185f3092ef4dc654f2ad5b6974d228111d1b9b4d1e623ca07579: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:07:10.919812 containerd[1559]: time="2026-03-10T02:07:10.919770882Z" level=info msg="CreateContainer within sandbox \"e49fe405f65757a54e504eaa56e6fb99476a4c1beb7cce6169e235077b93b542\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a15826c11121185f3092ef4dc654f2ad5b6974d228111d1b9b4d1e623ca07579\""
Mar 10 02:07:10.920228 systemd[1]: Started cri-containerd-a731665bbc06ff3339ff450e1449713be17cf8dc12452267835c6f8874784272.scope - libcontainer container a731665bbc06ff3339ff450e1449713be17cf8dc12452267835c6f8874784272.
Mar 10 02:07:10.920380 containerd[1559]: time="2026-03-10T02:07:10.920265435Z" level=info msg="StartContainer for \"a15826c11121185f3092ef4dc654f2ad5b6974d228111d1b9b4d1e623ca07579\""
Mar 10 02:07:10.921927 containerd[1559]: time="2026-03-10T02:07:10.921841397Z" level=info msg="connecting to shim a15826c11121185f3092ef4dc654f2ad5b6974d228111d1b9b4d1e623ca07579" address="unix:///run/containerd/s/525bf06d360c162ceaf7298aaf5c4f9f853ac0b8e1932b30c21c85c92b9d5c8d" protocol=ttrpc version=3
Mar 10 02:07:10.924142 systemd[1]: Started cri-containerd-a9aa6fa1eb719e2b9d4787377e04ad5d8840c8125d8413bcc4b919a52fccb97d.scope - libcontainer container a9aa6fa1eb719e2b9d4787377e04ad5d8840c8125d8413bcc4b919a52fccb97d.
Mar 10 02:07:10.950272 systemd[1]: Started cri-containerd-a15826c11121185f3092ef4dc654f2ad5b6974d228111d1b9b4d1e623ca07579.scope - libcontainer container a15826c11121185f3092ef4dc654f2ad5b6974d228111d1b9b4d1e623ca07579.
Mar 10 02:07:10.983216 containerd[1559]: time="2026-03-10T02:07:10.982892601Z" level=info msg="StartContainer for \"a731665bbc06ff3339ff450e1449713be17cf8dc12452267835c6f8874784272\" returns successfully" Mar 10 02:07:11.010807 containerd[1559]: time="2026-03-10T02:07:11.010667862Z" level=info msg="StartContainer for \"a9aa6fa1eb719e2b9d4787377e04ad5d8840c8125d8413bcc4b919a52fccb97d\" returns successfully" Mar 10 02:07:11.018574 containerd[1559]: time="2026-03-10T02:07:11.018466768Z" level=info msg="StartContainer for \"a15826c11121185f3092ef4dc654f2ad5b6974d228111d1b9b4d1e623ca07579\" returns successfully" Mar 10 02:07:11.422691 kubelet[2335]: I0310 02:07:11.422639 2335 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 10 02:07:11.912964 kubelet[2335]: E0310 02:07:11.912903 2335 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 02:07:11.918516 kubelet[2335]: E0310 02:07:11.918471 2335 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 02:07:11.919661 kubelet[2335]: E0310 02:07:11.919641 2335 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 02:07:12.489036 kubelet[2335]: E0310 02:07:12.488952 2335 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 10 02:07:12.529559 kubelet[2335]: E0310 02:07:12.529479 2335 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189b58b8fdfc66bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 02:07:09.858825916 +0000 UTC m=+0.277823905,LastTimestamp:2026-03-10 02:07:09.858825916 +0000 UTC m=+0.277823905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 02:07:12.583648 kubelet[2335]: I0310 02:07:12.583581 2335 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 02:07:12.664602 kubelet[2335]: I0310 02:07:12.664534 2335 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:12.669712 kubelet[2335]: E0310 02:07:12.669668 2335 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:12.669712 kubelet[2335]: I0310 02:07:12.669699 2335 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:12.671257 kubelet[2335]: E0310 02:07:12.671221 2335 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:12.671257 kubelet[2335]: I0310 02:07:12.671245 2335 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 02:07:12.672440 kubelet[2335]: E0310 02:07:12.672401 2335 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 10 02:07:12.855213 
kubelet[2335]: I0310 02:07:12.855046 2335 apiserver.go:52] "Watching apiserver" Mar 10 02:07:12.865601 kubelet[2335]: I0310 02:07:12.865536 2335 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 02:07:12.919419 kubelet[2335]: I0310 02:07:12.918787 2335 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 02:07:12.919419 kubelet[2335]: I0310 02:07:12.918909 2335 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:12.919419 kubelet[2335]: I0310 02:07:12.919217 2335 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:12.921029 kubelet[2335]: E0310 02:07:12.920890 2335 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:12.921029 kubelet[2335]: E0310 02:07:12.920912 2335 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 10 02:07:12.921329 kubelet[2335]: E0310 02:07:12.921309 2335 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:13.920814 kubelet[2335]: I0310 02:07:13.920769 2335 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:13.921506 kubelet[2335]: I0310 02:07:13.921413 2335 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 02:07:13.922034 kubelet[2335]: I0310 02:07:13.921266 2335 kubelet.go:3220] "Creating a mirror pod for 
static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:14.523665 systemd[1]: Reload requested from client PID 2629 ('systemctl') (unit session-7.scope)... Mar 10 02:07:14.523695 systemd[1]: Reloading... Mar 10 02:07:14.610145 zram_generator::config[2672]: No configuration found. Mar 10 02:07:14.849864 systemd[1]: Reloading finished in 325 ms. Mar 10 02:07:14.879163 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:07:14.896350 systemd[1]: kubelet.service: Deactivated successfully. Mar 10 02:07:14.896653 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:07:14.896709 systemd[1]: kubelet.service: Consumed 726ms CPU time, 129.7M memory peak. Mar 10 02:07:14.898451 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 02:07:15.087928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 02:07:15.092030 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 02:07:15.134605 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 10 02:07:15.134605 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 10 02:07:15.134605 kubelet[2717]: I0310 02:07:15.134570 2717 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 10 02:07:15.142863 kubelet[2717]: I0310 02:07:15.142817 2717 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 10 02:07:15.142863 kubelet[2717]: I0310 02:07:15.142845 2717 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 02:07:15.142863 kubelet[2717]: I0310 02:07:15.142868 2717 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 02:07:15.142996 kubelet[2717]: I0310 02:07:15.142878 2717 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 10 02:07:15.143103 kubelet[2717]: I0310 02:07:15.143026 2717 server.go:956] "Client rotation is on, will bootstrap in background" Mar 10 02:07:15.143965 kubelet[2717]: I0310 02:07:15.143943 2717 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 10 02:07:15.145639 kubelet[2717]: I0310 02:07:15.145600 2717 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 02:07:15.149124 kubelet[2717]: I0310 02:07:15.149027 2717 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 10 02:07:15.156854 kubelet[2717]: I0310 02:07:15.156694 2717 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 10 02:07:15.156971 kubelet[2717]: I0310 02:07:15.156940 2717 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 02:07:15.157231 kubelet[2717]: I0310 02:07:15.156968 2717 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 02:07:15.157231 kubelet[2717]: I0310 02:07:15.157212 2717 topology_manager.go:138] "Creating topology manager with none policy" Mar 10 02:07:15.157231 
kubelet[2717]: I0310 02:07:15.157221 2717 container_manager_linux.go:306] "Creating device plugin manager" Mar 10 02:07:15.157368 kubelet[2717]: I0310 02:07:15.157244 2717 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 02:07:15.157466 kubelet[2717]: I0310 02:07:15.157442 2717 state_mem.go:36] "Initialized new in-memory state store" Mar 10 02:07:15.157621 kubelet[2717]: I0310 02:07:15.157588 2717 kubelet.go:475] "Attempting to sync node with API server" Mar 10 02:07:15.157701 kubelet[2717]: I0310 02:07:15.157661 2717 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 02:07:15.158045 kubelet[2717]: I0310 02:07:15.158017 2717 kubelet.go:387] "Adding apiserver pod source" Mar 10 02:07:15.158140 kubelet[2717]: I0310 02:07:15.158050 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 02:07:15.159493 kubelet[2717]: I0310 02:07:15.159432 2717 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 10 02:07:15.159910 kubelet[2717]: I0310 02:07:15.159886 2717 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 02:07:15.159946 kubelet[2717]: I0310 02:07:15.159912 2717 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 02:07:15.163544 kubelet[2717]: I0310 02:07:15.163521 2717 server.go:1262] "Started kubelet" Mar 10 02:07:15.164185 kubelet[2717]: I0310 02:07:15.164145 2717 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 02:07:15.164401 kubelet[2717]: I0310 02:07:15.164365 2717 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 02:07:15.164486 kubelet[2717]: I0310 02:07:15.164407 2717 server_v1.go:49] 
"podresources" method="list" useActivePods=true Mar 10 02:07:15.164689 kubelet[2717]: I0310 02:07:15.164584 2717 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 02:07:15.166900 kubelet[2717]: I0310 02:07:15.166873 2717 server.go:310] "Adding debug handlers to kubelet server" Mar 10 02:07:15.173404 kubelet[2717]: I0310 02:07:15.173244 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 10 02:07:15.175239 kubelet[2717]: E0310 02:07:15.175196 2717 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 02:07:15.175699 kubelet[2717]: I0310 02:07:15.175674 2717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 02:07:15.177766 kubelet[2717]: I0310 02:07:15.177752 2717 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 10 02:07:15.178339 kubelet[2717]: I0310 02:07:15.177869 2717 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 02:07:15.178339 kubelet[2717]: I0310 02:07:15.177978 2717 reconciler.go:29] "Reconciler: start to sync state" Mar 10 02:07:15.178909 kubelet[2717]: I0310 02:07:15.178889 2717 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 02:07:15.180519 kubelet[2717]: I0310 02:07:15.180504 2717 factory.go:223] Registration of the containerd container factory successfully Mar 10 02:07:15.180591 kubelet[2717]: I0310 02:07:15.180582 2717 factory.go:223] Registration of the systemd container factory successfully Mar 10 02:07:15.190816 kubelet[2717]: I0310 02:07:15.190776 2717 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 10 02:07:15.192494 kubelet[2717]: I0310 02:07:15.192477 2717 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 10 02:07:15.192560 kubelet[2717]: I0310 02:07:15.192551 2717 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 10 02:07:15.192617 kubelet[2717]: I0310 02:07:15.192610 2717 kubelet.go:2428] "Starting kubelet main sync loop" Mar 10 02:07:15.192695 kubelet[2717]: E0310 02:07:15.192675 2717 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 02:07:15.218458 kubelet[2717]: I0310 02:07:15.218437 2717 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 10 02:07:15.218573 kubelet[2717]: I0310 02:07:15.218561 2717 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 10 02:07:15.218675 kubelet[2717]: I0310 02:07:15.218666 2717 state_mem.go:36] "Initialized new in-memory state store" Mar 10 02:07:15.218984 kubelet[2717]: I0310 02:07:15.218971 2717 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 10 02:07:15.219153 kubelet[2717]: I0310 02:07:15.219130 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 10 02:07:15.219257 kubelet[2717]: I0310 02:07:15.219248 2717 policy_none.go:49] "None policy: Start" Mar 10 02:07:15.219305 kubelet[2717]: I0310 02:07:15.219297 2717 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 10 02:07:15.219349 kubelet[2717]: I0310 02:07:15.219341 2717 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 02:07:15.219472 kubelet[2717]: I0310 02:07:15.219461 2717 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 10 02:07:15.219514 kubelet[2717]: I0310 02:07:15.219507 2717 policy_none.go:47] "Start" Mar 10 02:07:15.224237 kubelet[2717]: E0310 02:07:15.224163 2717 manager.go:513] "Failed to read data from 
checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 02:07:15.224418 kubelet[2717]: I0310 02:07:15.224355 2717 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 10 02:07:15.224418 kubelet[2717]: I0310 02:07:15.224394 2717 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 02:07:15.224623 kubelet[2717]: I0310 02:07:15.224600 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 10 02:07:15.225844 kubelet[2717]: E0310 02:07:15.225828 2717 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 10 02:07:15.293486 kubelet[2717]: I0310 02:07:15.293400 2717 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:15.293703 kubelet[2717]: I0310 02:07:15.293584 2717 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 02:07:15.293828 kubelet[2717]: I0310 02:07:15.293766 2717 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:15.300580 kubelet[2717]: E0310 02:07:15.300553 2717 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:15.302534 kubelet[2717]: E0310 02:07:15.302466 2717 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 10 02:07:15.302778 kubelet[2717]: E0310 02:07:15.302602 2717 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:15.328007 kubelet[2717]: I0310 02:07:15.327840 2717 kubelet_node_status.go:75] "Attempting to register 
node" node="localhost" Mar 10 02:07:15.335570 kubelet[2717]: I0310 02:07:15.335500 2717 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 10 02:07:15.335570 kubelet[2717]: I0310 02:07:15.335559 2717 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 10 02:07:15.378873 kubelet[2717]: I0310 02:07:15.378800 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/deac8424ef8d294886b1d3b592ae1e33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"deac8424ef8d294886b1d3b592ae1e33\") " pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:15.479789 kubelet[2717]: I0310 02:07:15.479737 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:15.479789 kubelet[2717]: I0310 02:07:15.479765 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:15.479789 kubelet[2717]: I0310 02:07:15.479782 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/deac8424ef8d294886b1d3b592ae1e33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"deac8424ef8d294886b1d3b592ae1e33\") " pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:15.479971 kubelet[2717]: I0310 02:07:15.479795 2717 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:15.479971 kubelet[2717]: I0310 02:07:15.479810 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:15.479971 kubelet[2717]: I0310 02:07:15.479830 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 10 02:07:15.479971 kubelet[2717]: I0310 02:07:15.479862 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/deac8424ef8d294886b1d3b592ae1e33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"deac8424ef8d294886b1d3b592ae1e33\") " pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:15.479971 kubelet[2717]: I0310 02:07:15.479877 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 02:07:16.158941 kubelet[2717]: I0310 02:07:16.158899 2717 
apiserver.go:52] "Watching apiserver" Mar 10 02:07:16.178211 kubelet[2717]: I0310 02:07:16.178162 2717 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 10 02:07:16.210348 kubelet[2717]: I0310 02:07:16.209512 2717 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 10 02:07:16.210348 kubelet[2717]: I0310 02:07:16.210322 2717 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:16.218430 kubelet[2717]: E0310 02:07:16.218339 2717 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 10 02:07:16.218993 kubelet[2717]: E0310 02:07:16.218926 2717 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 10 02:07:16.252830 kubelet[2717]: I0310 02:07:16.252730 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.252714024 podStartE2EDuration="3.252714024s" podCreationTimestamp="2026-03-10 02:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:07:16.240729092 +0000 UTC m=+1.144963122" watchObservedRunningTime="2026-03-10 02:07:16.252714024 +0000 UTC m=+1.156948034" Mar 10 02:07:16.267881 kubelet[2717]: I0310 02:07:16.267732 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.267712278 podStartE2EDuration="3.267712278s" podCreationTimestamp="2026-03-10 02:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:07:16.254256213 +0000 UTC m=+1.158490233" 
watchObservedRunningTime="2026-03-10 02:07:16.267712278 +0000 UTC m=+1.171946298" Mar 10 02:07:16.278833 kubelet[2717]: I0310 02:07:16.278764 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.278749651 podStartE2EDuration="3.278749651s" podCreationTimestamp="2026-03-10 02:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:07:16.268571233 +0000 UTC m=+1.172805242" watchObservedRunningTime="2026-03-10 02:07:16.278749651 +0000 UTC m=+1.182983661" Mar 10 02:07:19.655853 kubelet[2717]: I0310 02:07:19.655790 2717 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 10 02:07:19.656490 containerd[1559]: time="2026-03-10T02:07:19.656138248Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 10 02:07:19.656846 kubelet[2717]: I0310 02:07:19.656593 2717 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 10 02:07:20.768395 systemd[1]: Created slice kubepods-besteffort-pod2d6018df_dc3c_4aab_aa9f_59bbb6501612.slice - libcontainer container kubepods-besteffort-pod2d6018df_dc3c_4aab_aa9f_59bbb6501612.slice. 
Mar 10 02:07:20.814872 kubelet[2717]: I0310 02:07:20.814822 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d6018df-dc3c-4aab-aa9f-59bbb6501612-xtables-lock\") pod \"kube-proxy-2nqgp\" (UID: \"2d6018df-dc3c-4aab-aa9f-59bbb6501612\") " pod="kube-system/kube-proxy-2nqgp" Mar 10 02:07:20.814872 kubelet[2717]: I0310 02:07:20.814852 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d6018df-dc3c-4aab-aa9f-59bbb6501612-lib-modules\") pod \"kube-proxy-2nqgp\" (UID: \"2d6018df-dc3c-4aab-aa9f-59bbb6501612\") " pod="kube-system/kube-proxy-2nqgp" Mar 10 02:07:20.814872 kubelet[2717]: I0310 02:07:20.814867 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blp8v\" (UniqueName: \"kubernetes.io/projected/2d6018df-dc3c-4aab-aa9f-59bbb6501612-kube-api-access-blp8v\") pod \"kube-proxy-2nqgp\" (UID: \"2d6018df-dc3c-4aab-aa9f-59bbb6501612\") " pod="kube-system/kube-proxy-2nqgp" Mar 10 02:07:20.815309 kubelet[2717]: I0310 02:07:20.814881 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d6018df-dc3c-4aab-aa9f-59bbb6501612-kube-proxy\") pod \"kube-proxy-2nqgp\" (UID: \"2d6018df-dc3c-4aab-aa9f-59bbb6501612\") " pod="kube-system/kube-proxy-2nqgp" Mar 10 02:07:20.870754 systemd[1]: Created slice kubepods-besteffort-pod51870c31_c66f_4dbb_abbc_0fb32b7716a8.slice - libcontainer container kubepods-besteffort-pod51870c31_c66f_4dbb_abbc_0fb32b7716a8.slice. 
Mar 10 02:07:20.915463 kubelet[2717]: I0310 02:07:20.915375 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjxsv\" (UniqueName: \"kubernetes.io/projected/51870c31-c66f-4dbb-abbc-0fb32b7716a8-kube-api-access-wjxsv\") pod \"tigera-operator-5588576f44-t82nx\" (UID: \"51870c31-c66f-4dbb-abbc-0fb32b7716a8\") " pod="tigera-operator/tigera-operator-5588576f44-t82nx" Mar 10 02:07:20.915463 kubelet[2717]: I0310 02:07:20.915433 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/51870c31-c66f-4dbb-abbc-0fb32b7716a8-var-lib-calico\") pod \"tigera-operator-5588576f44-t82nx\" (UID: \"51870c31-c66f-4dbb-abbc-0fb32b7716a8\") " pod="tigera-operator/tigera-operator-5588576f44-t82nx" Mar 10 02:07:21.086855 containerd[1559]: time="2026-03-10T02:07:21.086735741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nqgp,Uid:2d6018df-dc3c-4aab-aa9f-59bbb6501612,Namespace:kube-system,Attempt:0,}" Mar 10 02:07:21.106432 containerd[1559]: time="2026-03-10T02:07:21.106223037Z" level=info msg="connecting to shim 67f818080b003804b57fd1e98d442163823aef42a2b932475bda523ddcc825e0" address="unix:///run/containerd/s/9c03a421370dc7d89891dfb51977f3cee0ecfdc722cd3cd0c62e9293e2ef09d9" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:21.134245 systemd[1]: Started cri-containerd-67f818080b003804b57fd1e98d442163823aef42a2b932475bda523ddcc825e0.scope - libcontainer container 67f818080b003804b57fd1e98d442163823aef42a2b932475bda523ddcc825e0. 
Mar 10 02:07:21.159892 containerd[1559]: time="2026-03-10T02:07:21.159854587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nqgp,Uid:2d6018df-dc3c-4aab-aa9f-59bbb6501612,Namespace:kube-system,Attempt:0,} returns sandbox id \"67f818080b003804b57fd1e98d442163823aef42a2b932475bda523ddcc825e0\""
Mar 10 02:07:21.165713 containerd[1559]: time="2026-03-10T02:07:21.165676408Z" level=info msg="CreateContainer within sandbox \"67f818080b003804b57fd1e98d442163823aef42a2b932475bda523ddcc825e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 10 02:07:21.177132 containerd[1559]: time="2026-03-10T02:07:21.176153299Z" level=info msg="Container e49dec90441467ea3b4f344541ab0a0ae49e1406392f326c2594211888ee7c14: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:07:21.177132 containerd[1559]: time="2026-03-10T02:07:21.176963752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-t82nx,Uid:51870c31-c66f-4dbb-abbc-0fb32b7716a8,Namespace:tigera-operator,Attempt:0,}"
Mar 10 02:07:21.188475 containerd[1559]: time="2026-03-10T02:07:21.188416770Z" level=info msg="CreateContainer within sandbox \"67f818080b003804b57fd1e98d442163823aef42a2b932475bda523ddcc825e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e49dec90441467ea3b4f344541ab0a0ae49e1406392f326c2594211888ee7c14\""
Mar 10 02:07:21.190003 containerd[1559]: time="2026-03-10T02:07:21.188924390Z" level=info msg="StartContainer for \"e49dec90441467ea3b4f344541ab0a0ae49e1406392f326c2594211888ee7c14\""
Mar 10 02:07:21.190669 containerd[1559]: time="2026-03-10T02:07:21.190601240Z" level=info msg="connecting to shim e49dec90441467ea3b4f344541ab0a0ae49e1406392f326c2594211888ee7c14" address="unix:///run/containerd/s/9c03a421370dc7d89891dfb51977f3cee0ecfdc722cd3cd0c62e9293e2ef09d9" protocol=ttrpc version=3
Mar 10 02:07:21.205442 containerd[1559]: time="2026-03-10T02:07:21.205393099Z" level=info msg="connecting to shim 8809da8d4858c01f52e9826624327b9881714767f2f8daf521273e097c385c2d" address="unix:///run/containerd/s/1e323356d26129a57d65208612b77d939f0ea98ae1c0fcfa04b0af78cbba31f0" namespace=k8s.io protocol=ttrpc version=3
Mar 10 02:07:21.216256 systemd[1]: Started cri-containerd-e49dec90441467ea3b4f344541ab0a0ae49e1406392f326c2594211888ee7c14.scope - libcontainer container e49dec90441467ea3b4f344541ab0a0ae49e1406392f326c2594211888ee7c14.
Mar 10 02:07:21.239254 systemd[1]: Started cri-containerd-8809da8d4858c01f52e9826624327b9881714767f2f8daf521273e097c385c2d.scope - libcontainer container 8809da8d4858c01f52e9826624327b9881714767f2f8daf521273e097c385c2d.
Mar 10 02:07:21.299136 containerd[1559]: time="2026-03-10T02:07:21.299007147Z" level=info msg="StartContainer for \"e49dec90441467ea3b4f344541ab0a0ae49e1406392f326c2594211888ee7c14\" returns successfully"
Mar 10 02:07:21.303930 containerd[1559]: time="2026-03-10T02:07:21.303887844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-t82nx,Uid:51870c31-c66f-4dbb-abbc-0fb32b7716a8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8809da8d4858c01f52e9826624327b9881714767f2f8daf521273e097c385c2d\""
Mar 10 02:07:21.305984 containerd[1559]: time="2026-03-10T02:07:21.305959351Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 10 02:07:22.437975 kubelet[2717]: I0310 02:07:22.437903 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2nqgp" podStartSLOduration=2.437888486 podStartE2EDuration="2.437888486s" podCreationTimestamp="2026-03-10 02:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:07:22.234597718 +0000 UTC m=+7.138831728" watchObservedRunningTime="2026-03-10 02:07:22.437888486 +0000 UTC m=+7.342122486"
Mar 10 02:07:23.656526 containerd[1559]: time="2026-03-10T02:07:23.656385536Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:07:23.657281 containerd[1559]: time="2026-03-10T02:07:23.657243482Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 10 02:07:23.658478 containerd[1559]: time="2026-03-10T02:07:23.658420389Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:07:23.660610 containerd[1559]: time="2026-03-10T02:07:23.660547011Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:07:23.661023 containerd[1559]: time="2026-03-10T02:07:23.660936054Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.354948642s"
Mar 10 02:07:23.661023 containerd[1559]: time="2026-03-10T02:07:23.660974416Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 10 02:07:23.665723 containerd[1559]: time="2026-03-10T02:07:23.665699545Z" level=info msg="CreateContainer within sandbox \"8809da8d4858c01f52e9826624327b9881714767f2f8daf521273e097c385c2d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 10 02:07:23.673130 containerd[1559]: time="2026-03-10T02:07:23.673043285Z" level=info msg="Container dfe448adf8a827f1014061d3a84f76615806974b54f798f68e4ffb63e745edf1: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:07:23.679891 containerd[1559]: time="2026-03-10T02:07:23.679838047Z" level=info msg="CreateContainer within sandbox \"8809da8d4858c01f52e9826624327b9881714767f2f8daf521273e097c385c2d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dfe448adf8a827f1014061d3a84f76615806974b54f798f68e4ffb63e745edf1\""
Mar 10 02:07:23.680379 containerd[1559]: time="2026-03-10T02:07:23.680303927Z" level=info msg="StartContainer for \"dfe448adf8a827f1014061d3a84f76615806974b54f798f68e4ffb63e745edf1\""
Mar 10 02:07:23.681152 containerd[1559]: time="2026-03-10T02:07:23.681004976Z" level=info msg="connecting to shim dfe448adf8a827f1014061d3a84f76615806974b54f798f68e4ffb63e745edf1" address="unix:///run/containerd/s/1e323356d26129a57d65208612b77d939f0ea98ae1c0fcfa04b0af78cbba31f0" protocol=ttrpc version=3
Mar 10 02:07:23.714218 systemd[1]: Started cri-containerd-dfe448adf8a827f1014061d3a84f76615806974b54f798f68e4ffb63e745edf1.scope - libcontainer container dfe448adf8a827f1014061d3a84f76615806974b54f798f68e4ffb63e745edf1.
Mar 10 02:07:23.750564 containerd[1559]: time="2026-03-10T02:07:23.750373332Z" level=info msg="StartContainer for \"dfe448adf8a827f1014061d3a84f76615806974b54f798f68e4ffb63e745edf1\" returns successfully"
Mar 10 02:07:25.414782 kubelet[2717]: I0310 02:07:25.414593 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-t82nx" podStartSLOduration=3.057943005 podStartE2EDuration="5.414580889s" podCreationTimestamp="2026-03-10 02:07:20 +0000 UTC" firstStartedPulling="2026-03-10 02:07:21.305463255 +0000 UTC m=+6.209697265" lastFinishedPulling="2026-03-10 02:07:23.662101139 +0000 UTC m=+8.566335149" observedRunningTime="2026-03-10 02:07:24.238293759 +0000 UTC m=+9.142527769" watchObservedRunningTime="2026-03-10 02:07:25.414580889 +0000 UTC m=+10.318814899"
Mar 10 02:07:28.908576 sudo[1767]: pam_unix(sudo:session): session closed for user root
Mar 10 02:07:28.911131 sshd[1766]: Connection closed by 10.0.0.1 port 56740
Mar 10 02:07:28.911403 sshd-session[1763]: pam_unix(sshd:session): session closed for user core
Mar 10 02:07:28.919260 systemd[1]: sshd@6-10.0.0.149:22-10.0.0.1:56740.service: Deactivated successfully.
Mar 10 02:07:28.925676 systemd[1]: session-7.scope: Deactivated successfully.
Mar 10 02:07:28.925953 systemd[1]: session-7.scope: Consumed 5.441s CPU time, 228.2M memory peak.
Mar 10 02:07:28.928973 systemd-logind[1546]: Session 7 logged out. Waiting for processes to exit.
Mar 10 02:07:28.931535 systemd-logind[1546]: Removed session 7.
Mar 10 02:07:30.848961 systemd[1]: Created slice kubepods-besteffort-pod705f4664_979c_4109_88b0_0155a24fbf9c.slice - libcontainer container kubepods-besteffort-pod705f4664_979c_4109_88b0_0155a24fbf9c.slice.
Mar 10 02:07:30.880553 kubelet[2717]: I0310 02:07:30.880491 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfv75\" (UniqueName: \"kubernetes.io/projected/705f4664-979c-4109-88b0-0155a24fbf9c-kube-api-access-pfv75\") pod \"calico-typha-75f8765f94-8c85w\" (UID: \"705f4664-979c-4109-88b0-0155a24fbf9c\") " pod="calico-system/calico-typha-75f8765f94-8c85w"
Mar 10 02:07:30.880553 kubelet[2717]: I0310 02:07:30.880543 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/705f4664-979c-4109-88b0-0155a24fbf9c-tigera-ca-bundle\") pod \"calico-typha-75f8765f94-8c85w\" (UID: \"705f4664-979c-4109-88b0-0155a24fbf9c\") " pod="calico-system/calico-typha-75f8765f94-8c85w"
Mar 10 02:07:30.880553 kubelet[2717]: I0310 02:07:30.880558 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/705f4664-979c-4109-88b0-0155a24fbf9c-typha-certs\") pod \"calico-typha-75f8765f94-8c85w\" (UID: \"705f4664-979c-4109-88b0-0155a24fbf9c\") " pod="calico-system/calico-typha-75f8765f94-8c85w"
Mar 10 02:07:30.900901 systemd[1]: Created slice kubepods-besteffort-podaab6d109_0d36_4973_97fd_d809195271c4.slice - libcontainer container kubepods-besteffort-podaab6d109_0d36_4973_97fd_d809195271c4.slice.
Mar 10 02:07:30.981423 kubelet[2717]: I0310 02:07:30.981042 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-cni-net-dir\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981584 kubelet[2717]: I0310 02:07:30.981465 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-var-run-calico\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981584 kubelet[2717]: I0310 02:07:30.981485 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-xtables-lock\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981584 kubelet[2717]: I0310 02:07:30.981513 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-bpffs\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981584 kubelet[2717]: I0310 02:07:30.981528 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-sys-fs\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981584 kubelet[2717]: I0310 02:07:30.981540 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-policysync\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981584 kubelet[2717]: I0310 02:07:30.981553 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-nodeproc\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981724 kubelet[2717]: I0310 02:07:30.981581 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-lib-modules\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981724 kubelet[2717]: I0310 02:07:30.981593 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aab6d109-0d36-4973-97fd-d809195271c4-node-certs\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981724 kubelet[2717]: I0310 02:07:30.981607 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgh7b\" (UniqueName: \"kubernetes.io/projected/aab6d109-0d36-4973-97fd-d809195271c4-kube-api-access-kgh7b\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981724 kubelet[2717]: I0310 02:07:30.981621 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-flexvol-driver-host\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981724 kubelet[2717]: I0310 02:07:30.981635 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-var-lib-calico\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981832 kubelet[2717]: I0310 02:07:30.981648 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-cni-bin-dir\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981832 kubelet[2717]: I0310 02:07:30.981660 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aab6d109-0d36-4973-97fd-d809195271c4-tigera-ca-bundle\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:30.981832 kubelet[2717]: I0310 02:07:30.981674 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aab6d109-0d36-4973-97fd-d809195271c4-cni-log-dir\") pod \"calico-node-vhblz\" (UID: \"aab6d109-0d36-4973-97fd-d809195271c4\") " pod="calico-system/calico-node-vhblz"
Mar 10 02:07:31.008446 kubelet[2717]: E0310 02:07:31.008286 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rxdl" podUID="f45461cc-b942-4f36-8ecd-98b10d5a677a"
Mar 10 02:07:31.082305 kubelet[2717]: I0310 02:07:31.082250 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f45461cc-b942-4f36-8ecd-98b10d5a677a-varrun\") pod \"csi-node-driver-9rxdl\" (UID: \"f45461cc-b942-4f36-8ecd-98b10d5a677a\") " pod="calico-system/csi-node-driver-9rxdl"
Mar 10 02:07:31.082305 kubelet[2717]: I0310 02:07:31.082330 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5gng\" (UniqueName: \"kubernetes.io/projected/f45461cc-b942-4f36-8ecd-98b10d5a677a-kube-api-access-w5gng\") pod \"csi-node-driver-9rxdl\" (UID: \"f45461cc-b942-4f36-8ecd-98b10d5a677a\") " pod="calico-system/csi-node-driver-9rxdl"
Mar 10 02:07:31.082522 kubelet[2717]: I0310 02:07:31.082365 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f45461cc-b942-4f36-8ecd-98b10d5a677a-kubelet-dir\") pod \"csi-node-driver-9rxdl\" (UID: \"f45461cc-b942-4f36-8ecd-98b10d5a677a\") " pod="calico-system/csi-node-driver-9rxdl"
Mar 10 02:07:31.082522 kubelet[2717]: I0310 02:07:31.082404 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f45461cc-b942-4f36-8ecd-98b10d5a677a-registration-dir\") pod \"csi-node-driver-9rxdl\" (UID: \"f45461cc-b942-4f36-8ecd-98b10d5a677a\") " pod="calico-system/csi-node-driver-9rxdl"
Mar 10 02:07:31.082522 kubelet[2717]: I0310 02:07:31.082434 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f45461cc-b942-4f36-8ecd-98b10d5a677a-socket-dir\") pod \"csi-node-driver-9rxdl\" (UID: \"f45461cc-b942-4f36-8ecd-98b10d5a677a\") " pod="calico-system/csi-node-driver-9rxdl"
Mar 10 02:07:31.084335 kubelet[2717]: E0310 02:07:31.084283 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.084535 kubelet[2717]: W0310 02:07:31.084347 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.084535 kubelet[2717]: E0310 02:07:31.084364 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.084649 kubelet[2717]: E0310 02:07:31.084601 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.084649 kubelet[2717]: W0310 02:07:31.084609 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.084649 kubelet[2717]: E0310 02:07:31.084619 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.088300 kubelet[2717]: E0310 02:07:31.088281 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.089096 kubelet[2717]: W0310 02:07:31.088296 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.089096 kubelet[2717]: E0310 02:07:31.088370 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.092809 kubelet[2717]: E0310 02:07:31.092756 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.092809 kubelet[2717]: W0310 02:07:31.092791 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.092809 kubelet[2717]: E0310 02:07:31.092806 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.158637 containerd[1559]: time="2026-03-10T02:07:31.158553192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75f8765f94-8c85w,Uid:705f4664-979c-4109-88b0-0155a24fbf9c,Namespace:calico-system,Attempt:0,}"
Mar 10 02:07:31.183717 kubelet[2717]: E0310 02:07:31.183680 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.183717 kubelet[2717]: W0310 02:07:31.183709 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.183841 kubelet[2717]: E0310 02:07:31.183728 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.184183 kubelet[2717]: E0310 02:07:31.184160 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.184183 kubelet[2717]: W0310 02:07:31.184179 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.184257 kubelet[2717]: E0310 02:07:31.184188 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.184491 kubelet[2717]: E0310 02:07:31.184468 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.184491 kubelet[2717]: W0310 02:07:31.184487 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.184547 kubelet[2717]: E0310 02:07:31.184495 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.184854 kubelet[2717]: E0310 02:07:31.184792 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.184854 kubelet[2717]: W0310 02:07:31.184820 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.184938 kubelet[2717]: E0310 02:07:31.184908 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.185242 kubelet[2717]: E0310 02:07:31.185222 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.185242 kubelet[2717]: W0310 02:07:31.185237 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.185307 kubelet[2717]: E0310 02:07:31.185248 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.185609 kubelet[2717]: E0310 02:07:31.185573 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.185609 kubelet[2717]: W0310 02:07:31.185603 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.185723 kubelet[2717]: E0310 02:07:31.185618 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.186149 kubelet[2717]: E0310 02:07:31.186120 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.186186 kubelet[2717]: W0310 02:07:31.186150 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.186186 kubelet[2717]: E0310 02:07:31.186163 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.186589 kubelet[2717]: E0310 02:07:31.186538 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.186589 kubelet[2717]: W0310 02:07:31.186571 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.186589 kubelet[2717]: E0310 02:07:31.186587 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.186981 kubelet[2717]: E0310 02:07:31.186953 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.187035 kubelet[2717]: W0310 02:07:31.186983 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.187035 kubelet[2717]: E0310 02:07:31.187001 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.187356 kubelet[2717]: E0310 02:07:31.187329 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.187356 kubelet[2717]: W0310 02:07:31.187346 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.187356 kubelet[2717]: E0310 02:07:31.187355 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.187645 kubelet[2717]: E0310 02:07:31.187599 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.187645 kubelet[2717]: W0310 02:07:31.187622 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.187645 kubelet[2717]: E0310 02:07:31.187630 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.187917 kubelet[2717]: E0310 02:07:31.187884 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.187917 kubelet[2717]: W0310 02:07:31.187902 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.187917 kubelet[2717]: E0310 02:07:31.187910 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.188239 kubelet[2717]: E0310 02:07:31.188206 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.188239 kubelet[2717]: W0310 02:07:31.188226 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.188239 kubelet[2717]: E0310 02:07:31.188236 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.188547 kubelet[2717]: E0310 02:07:31.188478 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.188547 kubelet[2717]: W0310 02:07:31.188516 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.188547 kubelet[2717]: E0310 02:07:31.188542 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.188949 kubelet[2717]: E0310 02:07:31.188911 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.188949 kubelet[2717]: W0310 02:07:31.188933 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.188949 kubelet[2717]: E0310 02:07:31.188943 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.189401 kubelet[2717]: E0310 02:07:31.189346 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.189401 kubelet[2717]: W0310 02:07:31.189367 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.189401 kubelet[2717]: E0310 02:07:31.189377 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.189679 kubelet[2717]: E0310 02:07:31.189641 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.189679 kubelet[2717]: W0310 02:07:31.189667 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.189679 kubelet[2717]: E0310 02:07:31.189676 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.190102 kubelet[2717]: E0310 02:07:31.189912 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.190102 kubelet[2717]: W0310 02:07:31.189932 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.190102 kubelet[2717]: E0310 02:07:31.189941 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.190274 kubelet[2717]: E0310 02:07:31.190211 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.190274 kubelet[2717]: W0310 02:07:31.190220 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.190274 kubelet[2717]: E0310 02:07:31.190228 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 02:07:31.190738 kubelet[2717]: E0310 02:07:31.190704 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 02:07:31.190738 kubelet[2717]: W0310 02:07:31.190728 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 02:07:31.190830 kubelet[2717]: E0310 02:07:31.190741 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 10 02:07:31.191166 kubelet[2717]: E0310 02:07:31.191132 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:31.191166 kubelet[2717]: W0310 02:07:31.191155 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:31.191166 kubelet[2717]: E0310 02:07:31.191167 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:31.191481 kubelet[2717]: E0310 02:07:31.191443 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:31.191481 kubelet[2717]: W0310 02:07:31.191468 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:31.191481 kubelet[2717]: E0310 02:07:31.191481 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:31.191793 kubelet[2717]: E0310 02:07:31.191753 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:31.191793 kubelet[2717]: W0310 02:07:31.191775 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:31.191793 kubelet[2717]: E0310 02:07:31.191788 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:31.192245 kubelet[2717]: E0310 02:07:31.192204 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:31.192245 kubelet[2717]: W0310 02:07:31.192232 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:31.192245 kubelet[2717]: E0310 02:07:31.192244 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:31.192584 kubelet[2717]: E0310 02:07:31.192549 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:31.192584 kubelet[2717]: W0310 02:07:31.192572 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:31.192584 kubelet[2717]: E0310 02:07:31.192584 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:31.201773 containerd[1559]: time="2026-03-10T02:07:31.201276250Z" level=info msg="connecting to shim a2bf5bd024f24025341611fd607c32bdf0a787bdcae1b9919d8767133fb0b54c" address="unix:///run/containerd/s/1a2ec864c078c83f398de322f34be4eefb11a52d00c31eeb0caf2b0a12be2e31" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:31.207915 kubelet[2717]: E0310 02:07:31.207852 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:31.207915 kubelet[2717]: W0310 02:07:31.207876 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:31.207915 kubelet[2717]: E0310 02:07:31.207896 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:31.211952 containerd[1559]: time="2026-03-10T02:07:31.211893471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vhblz,Uid:aab6d109-0d36-4973-97fd-d809195271c4,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:31.257846 containerd[1559]: time="2026-03-10T02:07:31.257786219Z" level=info msg="connecting to shim 0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63" address="unix:///run/containerd/s/0671ff8fde963ce3e7d528c8fa0f29491f8ba7b0611f5514ef6137573f708cc3" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:31.262363 systemd[1]: Started cri-containerd-a2bf5bd024f24025341611fd607c32bdf0a787bdcae1b9919d8767133fb0b54c.scope - libcontainer container a2bf5bd024f24025341611fd607c32bdf0a787bdcae1b9919d8767133fb0b54c. Mar 10 02:07:31.291218 systemd[1]: Started cri-containerd-0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63.scope - libcontainer container 0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63. 
Mar 10 02:07:31.318799 containerd[1559]: time="2026-03-10T02:07:31.318606056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75f8765f94-8c85w,Uid:705f4664-979c-4109-88b0-0155a24fbf9c,Namespace:calico-system,Attempt:0,} returns sandbox id \"a2bf5bd024f24025341611fd607c32bdf0a787bdcae1b9919d8767133fb0b54c\"" Mar 10 02:07:31.324400 containerd[1559]: time="2026-03-10T02:07:31.324300843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 10 02:07:31.334477 containerd[1559]: time="2026-03-10T02:07:31.334395278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vhblz,Uid:aab6d109-0d36-4973-97fd-d809195271c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63\"" Mar 10 02:07:33.193794 kubelet[2717]: E0310 02:07:33.193722 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rxdl" podUID="f45461cc-b942-4f36-8ecd-98b10d5a677a" Mar 10 02:07:33.528560 containerd[1559]: time="2026-03-10T02:07:33.528381914Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:33.529411 containerd[1559]: time="2026-03-10T02:07:33.529333548Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 10 02:07:33.530358 containerd[1559]: time="2026-03-10T02:07:33.530294923Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:33.532340 containerd[1559]: time="2026-03-10T02:07:33.532302242Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:33.532913 containerd[1559]: time="2026-03-10T02:07:33.532884213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.208361468s" Mar 10 02:07:33.532951 containerd[1559]: time="2026-03-10T02:07:33.532917604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 10 02:07:33.534094 containerd[1559]: time="2026-03-10T02:07:33.533975485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 10 02:07:33.549775 containerd[1559]: time="2026-03-10T02:07:33.549689149Z" level=info msg="CreateContainer within sandbox \"a2bf5bd024f24025341611fd607c32bdf0a787bdcae1b9919d8767133fb0b54c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 10 02:07:33.558020 containerd[1559]: time="2026-03-10T02:07:33.557977374Z" level=info msg="Container 0be42521535773407782ea8755945a80600004618f2a21cd84e09fe91dc9c16b: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:33.567948 containerd[1559]: time="2026-03-10T02:07:33.567877313Z" level=info msg="CreateContainer within sandbox \"a2bf5bd024f24025341611fd607c32bdf0a787bdcae1b9919d8767133fb0b54c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0be42521535773407782ea8755945a80600004618f2a21cd84e09fe91dc9c16b\"" Mar 10 02:07:33.569926 containerd[1559]: time="2026-03-10T02:07:33.569906099Z" level=info msg="StartContainer for 
\"0be42521535773407782ea8755945a80600004618f2a21cd84e09fe91dc9c16b\"" Mar 10 02:07:33.571162 containerd[1559]: time="2026-03-10T02:07:33.570975394Z" level=info msg="connecting to shim 0be42521535773407782ea8755945a80600004618f2a21cd84e09fe91dc9c16b" address="unix:///run/containerd/s/1a2ec864c078c83f398de322f34be4eefb11a52d00c31eeb0caf2b0a12be2e31" protocol=ttrpc version=3 Mar 10 02:07:33.600306 systemd[1]: Started cri-containerd-0be42521535773407782ea8755945a80600004618f2a21cd84e09fe91dc9c16b.scope - libcontainer container 0be42521535773407782ea8755945a80600004618f2a21cd84e09fe91dc9c16b. Mar 10 02:07:33.653646 containerd[1559]: time="2026-03-10T02:07:33.653587346Z" level=info msg="StartContainer for \"0be42521535773407782ea8755945a80600004618f2a21cd84e09fe91dc9c16b\" returns successfully" Mar 10 02:07:34.266070 kubelet[2717]: I0310 02:07:34.265978 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75f8765f94-8c85w" podStartSLOduration=2.054115252 podStartE2EDuration="4.265964852s" podCreationTimestamp="2026-03-10 02:07:30 +0000 UTC" firstStartedPulling="2026-03-10 02:07:31.321992127 +0000 UTC m=+16.226226137" lastFinishedPulling="2026-03-10 02:07:33.533841567 +0000 UTC m=+18.438075737" observedRunningTime="2026-03-10 02:07:34.265560017 +0000 UTC m=+19.169794037" watchObservedRunningTime="2026-03-10 02:07:34.265964852 +0000 UTC m=+19.170198862" Mar 10 02:07:34.296922 kubelet[2717]: E0310 02:07:34.296837 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.296922 kubelet[2717]: W0310 02:07:34.296885 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.296922 kubelet[2717]: E0310 02:07:34.296915 2717 plugins.go:697] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.297486 kubelet[2717]: E0310 02:07:34.297431 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.297486 kubelet[2717]: W0310 02:07:34.297460 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.297486 kubelet[2717]: E0310 02:07:34.297477 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.297868 kubelet[2717]: E0310 02:07:34.297814 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.297868 kubelet[2717]: W0310 02:07:34.297846 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.297868 kubelet[2717]: E0310 02:07:34.297860 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.298324 kubelet[2717]: E0310 02:07:34.298275 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.298324 kubelet[2717]: W0310 02:07:34.298302 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.298324 kubelet[2717]: E0310 02:07:34.298316 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.298685 kubelet[2717]: E0310 02:07:34.298631 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.298685 kubelet[2717]: W0310 02:07:34.298661 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.298685 kubelet[2717]: E0310 02:07:34.298674 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.298989 kubelet[2717]: E0310 02:07:34.298947 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.298989 kubelet[2717]: W0310 02:07:34.298973 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.298989 kubelet[2717]: E0310 02:07:34.298984 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.299355 kubelet[2717]: E0310 02:07:34.299332 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.299355 kubelet[2717]: W0310 02:07:34.299353 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.299399 kubelet[2717]: E0310 02:07:34.299364 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.299715 kubelet[2717]: E0310 02:07:34.299694 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.299743 kubelet[2717]: W0310 02:07:34.299715 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.299743 kubelet[2717]: E0310 02:07:34.299729 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.300132 kubelet[2717]: E0310 02:07:34.300028 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.300132 kubelet[2717]: W0310 02:07:34.300121 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.300132 kubelet[2717]: E0310 02:07:34.300134 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.300507 kubelet[2717]: E0310 02:07:34.300455 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.300507 kubelet[2717]: W0310 02:07:34.300481 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.300507 kubelet[2717]: E0310 02:07:34.300492 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.300849 kubelet[2717]: E0310 02:07:34.300788 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.300849 kubelet[2717]: W0310 02:07:34.300818 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.300849 kubelet[2717]: E0310 02:07:34.300835 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.301224 kubelet[2717]: E0310 02:07:34.301204 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.301224 kubelet[2717]: W0310 02:07:34.301221 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.301283 kubelet[2717]: E0310 02:07:34.301230 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.301504 kubelet[2717]: E0310 02:07:34.301485 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.301504 kubelet[2717]: W0310 02:07:34.301500 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.301550 kubelet[2717]: E0310 02:07:34.301508 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.301818 kubelet[2717]: E0310 02:07:34.301804 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.301818 kubelet[2717]: W0310 02:07:34.301815 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.301870 kubelet[2717]: E0310 02:07:34.301822 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.302186 kubelet[2717]: E0310 02:07:34.302164 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.302186 kubelet[2717]: W0310 02:07:34.302180 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.302229 kubelet[2717]: E0310 02:07:34.302189 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.312644 kubelet[2717]: E0310 02:07:34.312546 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.312644 kubelet[2717]: W0310 02:07:34.312572 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.312644 kubelet[2717]: E0310 02:07:34.312587 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.312976 kubelet[2717]: E0310 02:07:34.312945 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.312976 kubelet[2717]: W0310 02:07:34.312963 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.312976 kubelet[2717]: E0310 02:07:34.312971 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.313367 kubelet[2717]: E0310 02:07:34.313322 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.313367 kubelet[2717]: W0310 02:07:34.313345 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.313367 kubelet[2717]: E0310 02:07:34.313353 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.313699 kubelet[2717]: E0310 02:07:34.313655 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.313699 kubelet[2717]: W0310 02:07:34.313689 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.313699 kubelet[2717]: E0310 02:07:34.313698 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.314052 kubelet[2717]: E0310 02:07:34.314009 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.314052 kubelet[2717]: W0310 02:07:34.314053 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.314052 kubelet[2717]: E0310 02:07:34.314090 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.314388 kubelet[2717]: E0310 02:07:34.314344 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.314388 kubelet[2717]: W0310 02:07:34.314366 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.314388 kubelet[2717]: E0310 02:07:34.314374 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.314645 kubelet[2717]: E0310 02:07:34.314622 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.314645 kubelet[2717]: W0310 02:07:34.314638 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.314694 kubelet[2717]: E0310 02:07:34.314648 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.314941 kubelet[2717]: E0310 02:07:34.314896 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.314941 kubelet[2717]: W0310 02:07:34.314919 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.314941 kubelet[2717]: E0310 02:07:34.314927 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.315258 kubelet[2717]: E0310 02:07:34.315238 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.315288 kubelet[2717]: W0310 02:07:34.315259 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.315288 kubelet[2717]: E0310 02:07:34.315271 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.315732 kubelet[2717]: E0310 02:07:34.315701 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.315764 kubelet[2717]: W0310 02:07:34.315733 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.315764 kubelet[2717]: E0310 02:07:34.315749 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.316153 kubelet[2717]: E0310 02:07:34.316125 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.316193 kubelet[2717]: W0310 02:07:34.316153 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.316193 kubelet[2717]: E0310 02:07:34.316168 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.316507 kubelet[2717]: E0310 02:07:34.316454 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.316507 kubelet[2717]: W0310 02:07:34.316489 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.316507 kubelet[2717]: E0310 02:07:34.316503 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.316859 kubelet[2717]: E0310 02:07:34.316809 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.316859 kubelet[2717]: W0310 02:07:34.316843 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.316859 kubelet[2717]: E0310 02:07:34.316857 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.317270 kubelet[2717]: E0310 02:07:34.317223 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.317270 kubelet[2717]: W0310 02:07:34.317257 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.317270 kubelet[2717]: E0310 02:07:34.317270 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.317715 kubelet[2717]: E0310 02:07:34.317628 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.317715 kubelet[2717]: W0310 02:07:34.317649 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.317715 kubelet[2717]: E0310 02:07:34.317658 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.317981 kubelet[2717]: E0310 02:07:34.317960 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.317981 kubelet[2717]: W0310 02:07:34.317976 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.318084 kubelet[2717]: E0310 02:07:34.317984 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.318386 kubelet[2717]: E0310 02:07:34.318362 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.318386 kubelet[2717]: W0310 02:07:34.318379 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.318474 kubelet[2717]: E0310 02:07:34.318389 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 02:07:34.318677 kubelet[2717]: E0310 02:07:34.318655 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 02:07:34.318677 kubelet[2717]: W0310 02:07:34.318672 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 02:07:34.318715 kubelet[2717]: E0310 02:07:34.318681 2717 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 02:07:34.725097 containerd[1559]: time="2026-03-10T02:07:34.724996380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:34.725895 containerd[1559]: time="2026-03-10T02:07:34.725839567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 10 02:07:34.727299 containerd[1559]: time="2026-03-10T02:07:34.727243302Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:34.729527 containerd[1559]: time="2026-03-10T02:07:34.729467534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:34.730239 containerd[1559]: time="2026-03-10T02:07:34.730185478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.196182952s" Mar 10 02:07:34.730239 containerd[1559]: time="2026-03-10T02:07:34.730226564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 10 02:07:34.734832 containerd[1559]: time="2026-03-10T02:07:34.734756061Z" level=info msg="CreateContainer within sandbox \"0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 10 02:07:34.744291 containerd[1559]: time="2026-03-10T02:07:34.744226212Z" level=info msg="Container be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:34.751924 containerd[1559]: time="2026-03-10T02:07:34.751873443Z" level=info msg="CreateContainer within sandbox \"0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d\"" Mar 10 02:07:34.752569 containerd[1559]: time="2026-03-10T02:07:34.752533789Z" level=info msg="StartContainer for \"be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d\"" Mar 10 02:07:34.753690 containerd[1559]: time="2026-03-10T02:07:34.753647869Z" level=info msg="connecting to shim be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d" address="unix:///run/containerd/s/0671ff8fde963ce3e7d528c8fa0f29491f8ba7b0611f5514ef6137573f708cc3" protocol=ttrpc version=3 Mar 10 02:07:34.787303 systemd[1]: Started cri-containerd-be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d.scope - libcontainer container be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d. Mar 10 02:07:34.871729 containerd[1559]: time="2026-03-10T02:07:34.871678085Z" level=info msg="StartContainer for \"be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d\" returns successfully" Mar 10 02:07:34.886765 systemd[1]: cri-containerd-be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d.scope: Deactivated successfully. 
Mar 10 02:07:34.890440 containerd[1559]: time="2026-03-10T02:07:34.890308870Z" level=info msg="received container exit event container_id:\"be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d\" id:\"be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d\" pid:3364 exited_at:{seconds:1773108454 nanos:889761564}" Mar 10 02:07:34.918000 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be172d2197307fe9e817771b07ac9bbaebe3d5e46e07e34a560b42176c56409d-rootfs.mount: Deactivated successfully. Mar 10 02:07:35.193870 kubelet[2717]: E0310 02:07:35.193748 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rxdl" podUID="f45461cc-b942-4f36-8ecd-98b10d5a677a" Mar 10 02:07:35.257211 containerd[1559]: time="2026-03-10T02:07:35.257051796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 10 02:07:36.856356 update_engine[1551]: I20260310 02:07:36.856226 1551 update_attempter.cc:509] Updating boot flags... 
Mar 10 02:07:37.193885 kubelet[2717]: E0310 02:07:37.193769 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rxdl" podUID="f45461cc-b942-4f36-8ecd-98b10d5a677a" Mar 10 02:07:39.197093 kubelet[2717]: E0310 02:07:39.196693 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rxdl" podUID="f45461cc-b942-4f36-8ecd-98b10d5a677a" Mar 10 02:07:39.341284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2377537749.mount: Deactivated successfully. Mar 10 02:07:39.570380 containerd[1559]: time="2026-03-10T02:07:39.570259438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 10 02:07:39.572923 containerd[1559]: time="2026-03-10T02:07:39.572886876Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:39.573516 containerd[1559]: time="2026-03-10T02:07:39.573470420Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:39.594782 containerd[1559]: time="2026-03-10T02:07:39.594733259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:39.595252 containerd[1559]: time="2026-03-10T02:07:39.595202973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id 
\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4.33806701s" Mar 10 02:07:39.595252 containerd[1559]: time="2026-03-10T02:07:39.595235414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 10 02:07:39.602473 containerd[1559]: time="2026-03-10T02:07:39.602429754Z" level=info msg="CreateContainer within sandbox \"0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 10 02:07:39.613341 containerd[1559]: time="2026-03-10T02:07:39.613278088Z" level=info msg="Container d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:39.631114 containerd[1559]: time="2026-03-10T02:07:39.631082829Z" level=info msg="CreateContainer within sandbox \"0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7\"" Mar 10 02:07:39.631565 containerd[1559]: time="2026-03-10T02:07:39.631494033Z" level=info msg="StartContainer for \"d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7\"" Mar 10 02:07:39.632879 containerd[1559]: time="2026-03-10T02:07:39.632842078Z" level=info msg="connecting to shim d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7" address="unix:///run/containerd/s/0671ff8fde963ce3e7d528c8fa0f29491f8ba7b0611f5514ef6137573f708cc3" protocol=ttrpc version=3 Mar 10 02:07:39.657220 systemd[1]: Started cri-containerd-d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7.scope - libcontainer container 
d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7. Mar 10 02:07:39.746613 containerd[1559]: time="2026-03-10T02:07:39.746550450Z" level=info msg="StartContainer for \"d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7\" returns successfully" Mar 10 02:07:39.774872 systemd[1]: cri-containerd-d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7.scope: Deactivated successfully. Mar 10 02:07:39.784256 containerd[1559]: time="2026-03-10T02:07:39.784198337Z" level=info msg="received container exit event container_id:\"d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7\" id:\"d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7\" pid:3439 exited_at:{seconds:1773108459 nanos:776190885}" Mar 10 02:07:40.271012 containerd[1559]: time="2026-03-10T02:07:40.270851138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 10 02:07:40.342147 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d853e586267ec3e5d7c166476f17051f81ee9eebf4defb59333f1856f5d335a7-rootfs.mount: Deactivated successfully. 
Mar 10 02:07:41.193219 kubelet[2717]: E0310 02:07:41.193150 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rxdl" podUID="f45461cc-b942-4f36-8ecd-98b10d5a677a" Mar 10 02:07:42.807664 containerd[1559]: time="2026-03-10T02:07:42.807596955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:42.808231 containerd[1559]: time="2026-03-10T02:07:42.808188271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 10 02:07:42.809909 containerd[1559]: time="2026-03-10T02:07:42.809178959Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:42.813607 containerd[1559]: time="2026-03-10T02:07:42.813521737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:42.814487 containerd[1559]: time="2026-03-10T02:07:42.814442600Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.54348801s" Mar 10 02:07:42.814487 containerd[1559]: time="2026-03-10T02:07:42.814476894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 10 02:07:42.818603 containerd[1559]: time="2026-03-10T02:07:42.818535796Z" level=info msg="CreateContainer within sandbox \"0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 10 02:07:42.828160 containerd[1559]: time="2026-03-10T02:07:42.828119986Z" level=info msg="Container ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:42.838511 containerd[1559]: time="2026-03-10T02:07:42.838473603Z" level=info msg="CreateContainer within sandbox \"0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b\"" Mar 10 02:07:42.838863 containerd[1559]: time="2026-03-10T02:07:42.838841235Z" level=info msg="StartContainer for \"ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b\"" Mar 10 02:07:42.846477 containerd[1559]: time="2026-03-10T02:07:42.846409860Z" level=info msg="connecting to shim ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b" address="unix:///run/containerd/s/0671ff8fde963ce3e7d528c8fa0f29491f8ba7b0611f5514ef6137573f708cc3" protocol=ttrpc version=3 Mar 10 02:07:42.874232 systemd[1]: Started cri-containerd-ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b.scope - libcontainer container ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b. 
Mar 10 02:07:42.963710 containerd[1559]: time="2026-03-10T02:07:42.963649124Z" level=info msg="StartContainer for \"ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b\" returns successfully" Mar 10 02:07:43.193808 kubelet[2717]: E0310 02:07:43.193755 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9rxdl" podUID="f45461cc-b942-4f36-8ecd-98b10d5a677a" Mar 10 02:07:43.622688 systemd[1]: cri-containerd-ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b.scope: Deactivated successfully. Mar 10 02:07:43.623283 systemd[1]: cri-containerd-ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b.scope: Consumed 590ms CPU time, 182.2M memory peak, 4.8M read from disk, 177M written to disk. Mar 10 02:07:43.623955 containerd[1559]: time="2026-03-10T02:07:43.623705411Z" level=info msg="received container exit event container_id:\"ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b\" id:\"ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b\" pid:3498 exited_at:{seconds:1773108463 nanos:623378102}" Mar 10 02:07:43.640678 kubelet[2717]: I0310 02:07:43.640629 2717 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Mar 10 02:07:43.649964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba59bfdf8e982671a644ab06987637313c79275eb40cfd4d64259e70dd118a9b-rootfs.mount: Deactivated successfully. Mar 10 02:07:43.694411 systemd[1]: Created slice kubepods-burstable-podc09a44c4_f571_4334_8394_76c9c0007ea5.slice - libcontainer container kubepods-burstable-podc09a44c4_f571_4334_8394_76c9c0007ea5.slice. 
Mar 10 02:07:43.702580 systemd[1]: Created slice kubepods-besteffort-pod0277ffd5_4e59_4cdc_87b4_bb70cff55e14.slice - libcontainer container kubepods-besteffort-pod0277ffd5_4e59_4cdc_87b4_bb70cff55e14.slice. Mar 10 02:07:43.712466 systemd[1]: Created slice kubepods-burstable-pod95ea2f0e_a57c_4442_a6b9_ffb3fce130b7.slice - libcontainer container kubepods-burstable-pod95ea2f0e_a57c_4442_a6b9_ffb3fce130b7.slice. Mar 10 02:07:43.720825 systemd[1]: Created slice kubepods-besteffort-podc6552e7b_3fb9_4a05_88cd_e37203f5405d.slice - libcontainer container kubepods-besteffort-podc6552e7b_3fb9_4a05_88cd_e37203f5405d.slice. Mar 10 02:07:43.726968 systemd[1]: Created slice kubepods-besteffort-pod3804cfdb_f007_453a_9310_c14ad48665c1.slice - libcontainer container kubepods-besteffort-pod3804cfdb_f007_453a_9310_c14ad48665c1.slice. Mar 10 02:07:43.733249 systemd[1]: Created slice kubepods-besteffort-pod60484490_d829_4e39_bbde_8dc1e78cf934.slice - libcontainer container kubepods-besteffort-pod60484490_d829_4e39_bbde_8dc1e78cf934.slice. Mar 10 02:07:43.740295 systemd[1]: Created slice kubepods-besteffort-podf4410bc0_61db_48a1_8215_d72b639881fc.slice - libcontainer container kubepods-besteffort-podf4410bc0_61db_48a1_8215_d72b639881fc.slice. 
Mar 10 02:07:43.779929 kubelet[2717]: I0310 02:07:43.779845 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vng5h\" (UniqueName: \"kubernetes.io/projected/3804cfdb-f007-453a-9310-c14ad48665c1-kube-api-access-vng5h\") pod \"calico-apiserver-6767cf97dd-s4z2d\" (UID: \"3804cfdb-f007-453a-9310-c14ad48665c1\") " pod="calico-system/calico-apiserver-6767cf97dd-s4z2d" Mar 10 02:07:43.779929 kubelet[2717]: I0310 02:07:43.779895 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krf96\" (UniqueName: \"kubernetes.io/projected/0277ffd5-4e59-4cdc-87b4-bb70cff55e14-kube-api-access-krf96\") pod \"calico-apiserver-6767cf97dd-rcch5\" (UID: \"0277ffd5-4e59-4cdc-87b4-bb70cff55e14\") " pod="calico-system/calico-apiserver-6767cf97dd-rcch5" Mar 10 02:07:43.779929 kubelet[2717]: I0310 02:07:43.779913 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95ea2f0e-a57c-4442-a6b9-ffb3fce130b7-config-volume\") pod \"coredns-66bc5c9577-dd9lc\" (UID: \"95ea2f0e-a57c-4442-a6b9-ffb3fce130b7\") " pod="kube-system/coredns-66bc5c9577-dd9lc" Mar 10 02:07:43.779929 kubelet[2717]: I0310 02:07:43.779929 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/60484490-d829-4e39-bbde-8dc1e78cf934-nginx-config\") pod \"whisker-797bcf65fd-ngdnt\" (UID: \"60484490-d829-4e39-bbde-8dc1e78cf934\") " pod="calico-system/whisker-797bcf65fd-ngdnt" Mar 10 02:07:43.780249 kubelet[2717]: I0310 02:07:43.779943 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5ghm\" (UniqueName: \"kubernetes.io/projected/c09a44c4-f571-4334-8394-76c9c0007ea5-kube-api-access-x5ghm\") pod \"coredns-66bc5c9577-rqpzt\" (UID: 
\"c09a44c4-f571-4334-8394-76c9c0007ea5\") " pod="kube-system/coredns-66bc5c9577-rqpzt" Mar 10 02:07:43.780249 kubelet[2717]: I0310 02:07:43.779962 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6552e7b-3fb9-4a05-88cd-e37203f5405d-tigera-ca-bundle\") pod \"calico-kube-controllers-589bb4f5c4-8wzb9\" (UID: \"c6552e7b-3fb9-4a05-88cd-e37203f5405d\") " pod="calico-system/calico-kube-controllers-589bb4f5c4-8wzb9" Mar 10 02:07:43.780249 kubelet[2717]: I0310 02:07:43.779979 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3804cfdb-f007-453a-9310-c14ad48665c1-calico-apiserver-certs\") pod \"calico-apiserver-6767cf97dd-s4z2d\" (UID: \"3804cfdb-f007-453a-9310-c14ad48665c1\") " pod="calico-system/calico-apiserver-6767cf97dd-s4z2d" Mar 10 02:07:43.780249 kubelet[2717]: I0310 02:07:43.779994 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60484490-d829-4e39-bbde-8dc1e78cf934-whisker-backend-key-pair\") pod \"whisker-797bcf65fd-ngdnt\" (UID: \"60484490-d829-4e39-bbde-8dc1e78cf934\") " pod="calico-system/whisker-797bcf65fd-ngdnt" Mar 10 02:07:43.780249 kubelet[2717]: I0310 02:07:43.780010 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0277ffd5-4e59-4cdc-87b4-bb70cff55e14-calico-apiserver-certs\") pod \"calico-apiserver-6767cf97dd-rcch5\" (UID: \"0277ffd5-4e59-4cdc-87b4-bb70cff55e14\") " pod="calico-system/calico-apiserver-6767cf97dd-rcch5" Mar 10 02:07:43.780362 kubelet[2717]: I0310 02:07:43.780050 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/60484490-d829-4e39-bbde-8dc1e78cf934-whisker-ca-bundle\") pod \"whisker-797bcf65fd-ngdnt\" (UID: \"60484490-d829-4e39-bbde-8dc1e78cf934\") " pod="calico-system/whisker-797bcf65fd-ngdnt" Mar 10 02:07:43.780362 kubelet[2717]: I0310 02:07:43.780163 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g92n\" (UniqueName: \"kubernetes.io/projected/95ea2f0e-a57c-4442-a6b9-ffb3fce130b7-kube-api-access-2g92n\") pod \"coredns-66bc5c9577-dd9lc\" (UID: \"95ea2f0e-a57c-4442-a6b9-ffb3fce130b7\") " pod="kube-system/coredns-66bc5c9577-dd9lc" Mar 10 02:07:43.780362 kubelet[2717]: I0310 02:07:43.780195 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4410bc0-61db-48a1-8215-d72b639881fc-config\") pod \"goldmane-cccfbd5cf-c9bgq\" (UID: \"f4410bc0-61db-48a1-8215-d72b639881fc\") " pod="calico-system/goldmane-cccfbd5cf-c9bgq" Mar 10 02:07:43.780362 kubelet[2717]: I0310 02:07:43.780248 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4410bc0-61db-48a1-8215-d72b639881fc-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-c9bgq\" (UID: \"f4410bc0-61db-48a1-8215-d72b639881fc\") " pod="calico-system/goldmane-cccfbd5cf-c9bgq" Mar 10 02:07:43.780362 kubelet[2717]: I0310 02:07:43.780265 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f4410bc0-61db-48a1-8215-d72b639881fc-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-c9bgq\" (UID: \"f4410bc0-61db-48a1-8215-d72b639881fc\") " pod="calico-system/goldmane-cccfbd5cf-c9bgq" Mar 10 02:07:43.780475 kubelet[2717]: I0310 02:07:43.780280 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/c09a44c4-f571-4334-8394-76c9c0007ea5-config-volume\") pod \"coredns-66bc5c9577-rqpzt\" (UID: \"c09a44c4-f571-4334-8394-76c9c0007ea5\") " pod="kube-system/coredns-66bc5c9577-rqpzt" Mar 10 02:07:43.780475 kubelet[2717]: I0310 02:07:43.780295 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jx5kx\" (UniqueName: \"kubernetes.io/projected/f4410bc0-61db-48a1-8215-d72b639881fc-kube-api-access-jx5kx\") pod \"goldmane-cccfbd5cf-c9bgq\" (UID: \"f4410bc0-61db-48a1-8215-d72b639881fc\") " pod="calico-system/goldmane-cccfbd5cf-c9bgq" Mar 10 02:07:43.780475 kubelet[2717]: I0310 02:07:43.780310 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsmhs\" (UniqueName: \"kubernetes.io/projected/c6552e7b-3fb9-4a05-88cd-e37203f5405d-kube-api-access-vsmhs\") pod \"calico-kube-controllers-589bb4f5c4-8wzb9\" (UID: \"c6552e7b-3fb9-4a05-88cd-e37203f5405d\") " pod="calico-system/calico-kube-controllers-589bb4f5c4-8wzb9" Mar 10 02:07:43.780475 kubelet[2717]: I0310 02:07:43.780324 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb5g2\" (UniqueName: \"kubernetes.io/projected/60484490-d829-4e39-bbde-8dc1e78cf934-kube-api-access-hb5g2\") pod \"whisker-797bcf65fd-ngdnt\" (UID: \"60484490-d829-4e39-bbde-8dc1e78cf934\") " pod="calico-system/whisker-797bcf65fd-ngdnt" Mar 10 02:07:44.003619 containerd[1559]: time="2026-03-10T02:07:44.003584418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rqpzt,Uid:c09a44c4-f571-4334-8394-76c9c0007ea5,Namespace:kube-system,Attempt:0,}" Mar 10 02:07:44.011102 containerd[1559]: time="2026-03-10T02:07:44.010997175Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6767cf97dd-rcch5,Uid:0277ffd5-4e59-4cdc-87b4-bb70cff55e14,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:44.021909 containerd[1559]: time="2026-03-10T02:07:44.021887198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dd9lc,Uid:95ea2f0e-a57c-4442-a6b9-ffb3fce130b7,Namespace:kube-system,Attempt:0,}" Mar 10 02:07:44.030737 containerd[1559]: time="2026-03-10T02:07:44.030715463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589bb4f5c4-8wzb9,Uid:c6552e7b-3fb9-4a05-88cd-e37203f5405d,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:44.045040 containerd[1559]: time="2026-03-10T02:07:44.044511583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-797bcf65fd-ngdnt,Uid:60484490-d829-4e39-bbde-8dc1e78cf934,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:44.045303 containerd[1559]: time="2026-03-10T02:07:44.045283872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6767cf97dd-s4z2d,Uid:3804cfdb-f007-453a-9310-c14ad48665c1,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:44.054422 containerd[1559]: time="2026-03-10T02:07:44.054277572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-c9bgq,Uid:f4410bc0-61db-48a1-8215-d72b639881fc,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:44.174655 containerd[1559]: time="2026-03-10T02:07:44.174585278Z" level=error msg="Failed to destroy network for sandbox \"f1abd908b7f81c167e0200df93ed98969540bac97aff2d56e1b8cf9b0fcaf3cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.177144 containerd[1559]: time="2026-03-10T02:07:44.177053491Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-797bcf65fd-ngdnt,Uid:60484490-d829-4e39-bbde-8dc1e78cf934,Namespace:calico-system,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1abd908b7f81c167e0200df93ed98969540bac97aff2d56e1b8cf9b0fcaf3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.179425 containerd[1559]: time="2026-03-10T02:07:44.179360269Z" level=error msg="Failed to destroy network for sandbox \"9e9af7a876746b12e4115dd9f87d53f22ac816082e02264699a542040bd929bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.180322 containerd[1559]: time="2026-03-10T02:07:44.180234517Z" level=error msg="Failed to destroy network for sandbox \"db5bec18b30fb9f9aafecc12de10af19d68e1e771114577b9e8caa06b5ed09f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.181610 kubelet[2717]: E0310 02:07:44.181570 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1abd908b7f81c167e0200df93ed98969540bac97aff2d56e1b8cf9b0fcaf3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.181749 kubelet[2717]: E0310 02:07:44.181629 2717 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1abd908b7f81c167e0200df93ed98969540bac97aff2d56e1b8cf9b0fcaf3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-797bcf65fd-ngdnt" Mar 10 02:07:44.181749 kubelet[2717]: E0310 02:07:44.181647 2717 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1abd908b7f81c167e0200df93ed98969540bac97aff2d56e1b8cf9b0fcaf3cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-797bcf65fd-ngdnt" Mar 10 02:07:44.181749 kubelet[2717]: E0310 02:07:44.181711 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-797bcf65fd-ngdnt_calico-system(60484490-d829-4e39-bbde-8dc1e78cf934)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-797bcf65fd-ngdnt_calico-system(60484490-d829-4e39-bbde-8dc1e78cf934)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1abd908b7f81c167e0200df93ed98969540bac97aff2d56e1b8cf9b0fcaf3cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-797bcf65fd-ngdnt" podUID="60484490-d829-4e39-bbde-8dc1e78cf934" Mar 10 02:07:44.183634 containerd[1559]: time="2026-03-10T02:07:44.183535363Z" level=error msg="Failed to destroy network for sandbox \"d788c0b1a33ae00ced03998befb511fb127786a9e2f74ed0ea2c2fe6957f47ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.183717 containerd[1559]: time="2026-03-10T02:07:44.183535762Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6767cf97dd-rcch5,Uid:0277ffd5-4e59-4cdc-87b4-bb70cff55e14,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e9af7a876746b12e4115dd9f87d53f22ac816082e02264699a542040bd929bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.183958 kubelet[2717]: E0310 02:07:44.183901 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e9af7a876746b12e4115dd9f87d53f22ac816082e02264699a542040bd929bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.183958 kubelet[2717]: E0310 02:07:44.183949 2717 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e9af7a876746b12e4115dd9f87d53f22ac816082e02264699a542040bd929bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6767cf97dd-rcch5" Mar 10 02:07:44.184014 kubelet[2717]: E0310 02:07:44.183967 2717 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e9af7a876746b12e4115dd9f87d53f22ac816082e02264699a542040bd929bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6767cf97dd-rcch5" Mar 10 02:07:44.184163 kubelet[2717]: E0310 02:07:44.184010 2717 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6767cf97dd-rcch5_calico-system(0277ffd5-4e59-4cdc-87b4-bb70cff55e14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6767cf97dd-rcch5_calico-system(0277ffd5-4e59-4cdc-87b4-bb70cff55e14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e9af7a876746b12e4115dd9f87d53f22ac816082e02264699a542040bd929bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6767cf97dd-rcch5" podUID="0277ffd5-4e59-4cdc-87b4-bb70cff55e14" Mar 10 02:07:44.184639 containerd[1559]: time="2026-03-10T02:07:44.184586085Z" level=error msg="Failed to destroy network for sandbox \"f5165a105e0fecbc91a13c05aaffe40f91c2233cd3489c9bb7756ad25dc7fa20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.186160 containerd[1559]: time="2026-03-10T02:07:44.185779806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-c9bgq,Uid:f4410bc0-61db-48a1-8215-d72b639881fc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db5bec18b30fb9f9aafecc12de10af19d68e1e771114577b9e8caa06b5ed09f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.186918 kubelet[2717]: E0310 02:07:44.186839 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db5bec18b30fb9f9aafecc12de10af19d68e1e771114577b9e8caa06b5ed09f2\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.187183 kubelet[2717]: E0310 02:07:44.187153 2717 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db5bec18b30fb9f9aafecc12de10af19d68e1e771114577b9e8caa06b5ed09f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-c9bgq" Mar 10 02:07:44.187263 kubelet[2717]: E0310 02:07:44.187234 2717 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db5bec18b30fb9f9aafecc12de10af19d68e1e771114577b9e8caa06b5ed09f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-c9bgq" Mar 10 02:07:44.187349 containerd[1559]: time="2026-03-10T02:07:44.187325434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589bb4f5c4-8wzb9,Uid:c6552e7b-3fb9-4a05-88cd-e37203f5405d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d788c0b1a33ae00ced03998befb511fb127786a9e2f74ed0ea2c2fe6957f47ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.187472 kubelet[2717]: E0310 02:07:44.187446 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-c9bgq_calico-system(f4410bc0-61db-48a1-8215-d72b639881fc)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-c9bgq_calico-system(f4410bc0-61db-48a1-8215-d72b639881fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db5bec18b30fb9f9aafecc12de10af19d68e1e771114577b9e8caa06b5ed09f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-c9bgq" podUID="f4410bc0-61db-48a1-8215-d72b639881fc" Mar 10 02:07:44.187667 kubelet[2717]: E0310 02:07:44.187545 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d788c0b1a33ae00ced03998befb511fb127786a9e2f74ed0ea2c2fe6957f47ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.187748 kubelet[2717]: E0310 02:07:44.187673 2717 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d788c0b1a33ae00ced03998befb511fb127786a9e2f74ed0ea2c2fe6957f47ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-589bb4f5c4-8wzb9" Mar 10 02:07:44.187748 kubelet[2717]: E0310 02:07:44.187687 2717 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d788c0b1a33ae00ced03998befb511fb127786a9e2f74ed0ea2c2fe6957f47ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-589bb4f5c4-8wzb9" Mar 10 02:07:44.187748 kubelet[2717]: E0310 02:07:44.187712 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-589bb4f5c4-8wzb9_calico-system(c6552e7b-3fb9-4a05-88cd-e37203f5405d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-589bb4f5c4-8wzb9_calico-system(c6552e7b-3fb9-4a05-88cd-e37203f5405d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d788c0b1a33ae00ced03998befb511fb127786a9e2f74ed0ea2c2fe6957f47ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-589bb4f5c4-8wzb9" podUID="c6552e7b-3fb9-4a05-88cd-e37203f5405d" Mar 10 02:07:44.187941 containerd[1559]: time="2026-03-10T02:07:44.187899996Z" level=error msg="Failed to destroy network for sandbox \"09e01b6e32b856ef39e35c776fda163560f838361964e6cb87c22a038ccedb6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.188966 containerd[1559]: time="2026-03-10T02:07:44.188887702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dd9lc,Uid:95ea2f0e-a57c-4442-a6b9-ffb3fce130b7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5165a105e0fecbc91a13c05aaffe40f91c2233cd3489c9bb7756ad25dc7fa20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.189340 kubelet[2717]: E0310 02:07:44.189272 2717 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5165a105e0fecbc91a13c05aaffe40f91c2233cd3489c9bb7756ad25dc7fa20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.189449 kubelet[2717]: E0310 02:07:44.189339 2717 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5165a105e0fecbc91a13c05aaffe40f91c2233cd3489c9bb7756ad25dc7fa20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dd9lc" Mar 10 02:07:44.189449 kubelet[2717]: E0310 02:07:44.189358 2717 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5165a105e0fecbc91a13c05aaffe40f91c2233cd3489c9bb7756ad25dc7fa20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-dd9lc" Mar 10 02:07:44.189449 kubelet[2717]: E0310 02:07:44.189400 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-dd9lc_kube-system(95ea2f0e-a57c-4442-a6b9-ffb3fce130b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-dd9lc_kube-system(95ea2f0e-a57c-4442-a6b9-ffb3fce130b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5165a105e0fecbc91a13c05aaffe40f91c2233cd3489c9bb7756ad25dc7fa20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-dd9lc" podUID="95ea2f0e-a57c-4442-a6b9-ffb3fce130b7" Mar 10 02:07:44.190396 containerd[1559]: time="2026-03-10T02:07:44.190191194Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6767cf97dd-s4z2d,Uid:3804cfdb-f007-453a-9310-c14ad48665c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e01b6e32b856ef39e35c776fda163560f838361964e6cb87c22a038ccedb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.190706 kubelet[2717]: E0310 02:07:44.190661 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e01b6e32b856ef39e35c776fda163560f838361964e6cb87c22a038ccedb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.190706 kubelet[2717]: E0310 02:07:44.190689 2717 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e01b6e32b856ef39e35c776fda163560f838361964e6cb87c22a038ccedb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6767cf97dd-s4z2d" Mar 10 02:07:44.190757 kubelet[2717]: E0310 02:07:44.190706 2717 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09e01b6e32b856ef39e35c776fda163560f838361964e6cb87c22a038ccedb6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6767cf97dd-s4z2d" Mar 10 02:07:44.190783 kubelet[2717]: E0310 02:07:44.190770 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6767cf97dd-s4z2d_calico-system(3804cfdb-f007-453a-9310-c14ad48665c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6767cf97dd-s4z2d_calico-system(3804cfdb-f007-453a-9310-c14ad48665c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09e01b6e32b856ef39e35c776fda163560f838361964e6cb87c22a038ccedb6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6767cf97dd-s4z2d" podUID="3804cfdb-f007-453a-9310-c14ad48665c1" Mar 10 02:07:44.192397 containerd[1559]: time="2026-03-10T02:07:44.192356723Z" level=error msg="Failed to destroy network for sandbox \"dd89fb7aec52548597b7b965ddcac83ecfd26c23c95f7b7ddd026a9ae7f7d985\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.193563 containerd[1559]: time="2026-03-10T02:07:44.193502005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rqpzt,Uid:c09a44c4-f571-4334-8394-76c9c0007ea5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd89fb7aec52548597b7b965ddcac83ecfd26c23c95f7b7ddd026a9ae7f7d985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.193737 kubelet[2717]: E0310 
02:07:44.193706 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd89fb7aec52548597b7b965ddcac83ecfd26c23c95f7b7ddd026a9ae7f7d985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 02:07:44.193820 kubelet[2717]: E0310 02:07:44.193789 2717 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd89fb7aec52548597b7b965ddcac83ecfd26c23c95f7b7ddd026a9ae7f7d985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rqpzt" Mar 10 02:07:44.193820 kubelet[2717]: E0310 02:07:44.193817 2717 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd89fb7aec52548597b7b965ddcac83ecfd26c23c95f7b7ddd026a9ae7f7d985\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rqpzt" Mar 10 02:07:44.194370 kubelet[2717]: E0310 02:07:44.193851 2717 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rqpzt_kube-system(c09a44c4-f571-4334-8394-76c9c0007ea5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rqpzt_kube-system(c09a44c4-f571-4334-8394-76c9c0007ea5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd89fb7aec52548597b7b965ddcac83ecfd26c23c95f7b7ddd026a9ae7f7d985\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rqpzt" podUID="c09a44c4-f571-4334-8394-76c9c0007ea5" Mar 10 02:07:44.294988 containerd[1559]: time="2026-03-10T02:07:44.294464437Z" level=info msg="CreateContainer within sandbox \"0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 10 02:07:44.304366 containerd[1559]: time="2026-03-10T02:07:44.304309674Z" level=info msg="Container 87a68b7f25b5e06d0992825137c151bc79187817597dbee2e24be660c97f66d9: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:44.312645 containerd[1559]: time="2026-03-10T02:07:44.312577806Z" level=info msg="CreateContainer within sandbox \"0bd7e27f78790776ee62974ad87c6576778329085653eca333160cf2f80f8e63\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"87a68b7f25b5e06d0992825137c151bc79187817597dbee2e24be660c97f66d9\"" Mar 10 02:07:44.313245 containerd[1559]: time="2026-03-10T02:07:44.313168659Z" level=info msg="StartContainer for \"87a68b7f25b5e06d0992825137c151bc79187817597dbee2e24be660c97f66d9\"" Mar 10 02:07:44.314592 containerd[1559]: time="2026-03-10T02:07:44.314527818Z" level=info msg="connecting to shim 87a68b7f25b5e06d0992825137c151bc79187817597dbee2e24be660c97f66d9" address="unix:///run/containerd/s/0671ff8fde963ce3e7d528c8fa0f29491f8ba7b0611f5514ef6137573f708cc3" protocol=ttrpc version=3 Mar 10 02:07:44.341209 systemd[1]: Started cri-containerd-87a68b7f25b5e06d0992825137c151bc79187817597dbee2e24be660c97f66d9.scope - libcontainer container 87a68b7f25b5e06d0992825137c151bc79187817597dbee2e24be660c97f66d9. 
Mar 10 02:07:44.434465 containerd[1559]: time="2026-03-10T02:07:44.434426194Z" level=info msg="StartContainer for \"87a68b7f25b5e06d0992825137c151bc79187817597dbee2e24be660c97f66d9\" returns successfully" Mar 10 02:07:44.686550 kubelet[2717]: I0310 02:07:44.686163 2717 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/60484490-d829-4e39-bbde-8dc1e78cf934-nginx-config\") pod \"60484490-d829-4e39-bbde-8dc1e78cf934\" (UID: \"60484490-d829-4e39-bbde-8dc1e78cf934\") " Mar 10 02:07:44.686550 kubelet[2717]: I0310 02:07:44.686437 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60484490-d829-4e39-bbde-8dc1e78cf934-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "60484490-d829-4e39-bbde-8dc1e78cf934" (UID: "60484490-d829-4e39-bbde-8dc1e78cf934"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 02:07:44.687920 kubelet[2717]: I0310 02:07:44.687009 2717 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60484490-d829-4e39-bbde-8dc1e78cf934-whisker-ca-bundle\") pod \"60484490-d829-4e39-bbde-8dc1e78cf934\" (UID: \"60484490-d829-4e39-bbde-8dc1e78cf934\") " Mar 10 02:07:44.687920 kubelet[2717]: I0310 02:07:44.687119 2717 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60484490-d829-4e39-bbde-8dc1e78cf934-whisker-backend-key-pair\") pod \"60484490-d829-4e39-bbde-8dc1e78cf934\" (UID: \"60484490-d829-4e39-bbde-8dc1e78cf934\") " Mar 10 02:07:44.687920 kubelet[2717]: I0310 02:07:44.687138 2717 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb5g2\" (UniqueName: \"kubernetes.io/projected/60484490-d829-4e39-bbde-8dc1e78cf934-kube-api-access-hb5g2\") pod 
\"60484490-d829-4e39-bbde-8dc1e78cf934\" (UID: \"60484490-d829-4e39-bbde-8dc1e78cf934\") " Mar 10 02:07:44.687920 kubelet[2717]: I0310 02:07:44.687203 2717 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/60484490-d829-4e39-bbde-8dc1e78cf934-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 10 02:07:44.690530 kubelet[2717]: I0310 02:07:44.690211 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60484490-d829-4e39-bbde-8dc1e78cf934-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "60484490-d829-4e39-bbde-8dc1e78cf934" (UID: "60484490-d829-4e39-bbde-8dc1e78cf934"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 02:07:44.692929 kubelet[2717]: I0310 02:07:44.692800 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60484490-d829-4e39-bbde-8dc1e78cf934-kube-api-access-hb5g2" (OuterVolumeSpecName: "kube-api-access-hb5g2") pod "60484490-d829-4e39-bbde-8dc1e78cf934" (UID: "60484490-d829-4e39-bbde-8dc1e78cf934"). InnerVolumeSpecName "kube-api-access-hb5g2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 02:07:44.695843 kubelet[2717]: I0310 02:07:44.695792 2717 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60484490-d829-4e39-bbde-8dc1e78cf934-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "60484490-d829-4e39-bbde-8dc1e78cf934" (UID: "60484490-d829-4e39-bbde-8dc1e78cf934"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 10 02:07:44.788436 kubelet[2717]: I0310 02:07:44.788387 2717 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60484490-d829-4e39-bbde-8dc1e78cf934-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 10 02:07:44.788436 kubelet[2717]: I0310 02:07:44.788436 2717 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/60484490-d829-4e39-bbde-8dc1e78cf934-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 10 02:07:44.788436 kubelet[2717]: I0310 02:07:44.788452 2717 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hb5g2\" (UniqueName: \"kubernetes.io/projected/60484490-d829-4e39-bbde-8dc1e78cf934-kube-api-access-hb5g2\") on node \"localhost\" DevicePath \"\"" Mar 10 02:07:44.892682 systemd[1]: var-lib-kubelet-pods-60484490\x2dd829\x2d4e39\x2dbbde\x2d8dc1e78cf934-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhb5g2.mount: Deactivated successfully. Mar 10 02:07:44.892810 systemd[1]: var-lib-kubelet-pods-60484490\x2dd829\x2d4e39\x2dbbde\x2d8dc1e78cf934-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 10 02:07:45.200201 systemd[1]: Created slice kubepods-besteffort-podf45461cc_b942_4f36_8ecd_98b10d5a677a.slice - libcontainer container kubepods-besteffort-podf45461cc_b942_4f36_8ecd_98b10d5a677a.slice. Mar 10 02:07:45.201444 systemd[1]: Removed slice kubepods-besteffort-pod60484490_d829_4e39_bbde_8dc1e78cf934.slice - libcontainer container kubepods-besteffort-pod60484490_d829_4e39_bbde_8dc1e78cf934.slice. 
Mar 10 02:07:45.205900 containerd[1559]: time="2026-03-10T02:07:45.205807695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9rxdl,Uid:f45461cc-b942-4f36-8ecd-98b10d5a677a,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:45.307099 kubelet[2717]: I0310 02:07:45.306934 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vhblz" podStartSLOduration=3.827549169 podStartE2EDuration="15.306920714s" podCreationTimestamp="2026-03-10 02:07:30 +0000 UTC" firstStartedPulling="2026-03-10 02:07:31.335604628 +0000 UTC m=+16.239838638" lastFinishedPulling="2026-03-10 02:07:42.814976173 +0000 UTC m=+27.719210183" observedRunningTime="2026-03-10 02:07:45.305967425 +0000 UTC m=+30.210201435" watchObservedRunningTime="2026-03-10 02:07:45.306920714 +0000 UTC m=+30.211154724" Mar 10 02:07:45.335669 systemd-networkd[1459]: cali5b152282075: Link UP Mar 10 02:07:45.336338 systemd-networkd[1459]: cali5b152282075: Gained carrier Mar 10 02:07:45.353988 containerd[1559]: 2026-03-10 02:07:45.230 [ERROR][3827] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 02:07:45.353988 containerd[1559]: 2026-03-10 02:07:45.251 [INFO][3827] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9rxdl-eth0 csi-node-driver- calico-system f45461cc-b942-4f36-8ecd-98b10d5a677a 722 0 2026-03-10 02:07:30 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9rxdl eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] cali5b152282075 [] [] }} ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" Namespace="calico-system" Pod="csi-node-driver-9rxdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rxdl-" Mar 10 02:07:45.353988 containerd[1559]: 2026-03-10 02:07:45.251 [INFO][3827] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" Namespace="calico-system" Pod="csi-node-driver-9rxdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rxdl-eth0" Mar 10 02:07:45.353988 containerd[1559]: 2026-03-10 02:07:45.276 [INFO][3841] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" HandleID="k8s-pod-network.8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" Workload="localhost-k8s-csi--node--driver--9rxdl-eth0" Mar 10 02:07:45.354431 containerd[1559]: 2026-03-10 02:07:45.282 [INFO][3841] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" HandleID="k8s-pod-network.8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" Workload="localhost-k8s-csi--node--driver--9rxdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000134ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9rxdl", "timestamp":"2026-03-10 02:07:45.276560959 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005f6420)} Mar 10 02:07:45.354431 containerd[1559]: 2026-03-10 02:07:45.282 [INFO][3841] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 10 02:07:45.354431 containerd[1559]: 2026-03-10 02:07:45.282 [INFO][3841] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:07:45.354431 containerd[1559]: 2026-03-10 02:07:45.282 [INFO][3841] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:07:45.354431 containerd[1559]: 2026-03-10 02:07:45.285 [INFO][3841] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" host="localhost" Mar 10 02:07:45.354431 containerd[1559]: 2026-03-10 02:07:45.290 [INFO][3841] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:07:45.354431 containerd[1559]: 2026-03-10 02:07:45.296 [INFO][3841] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:07:45.354431 containerd[1559]: 2026-03-10 02:07:45.297 [INFO][3841] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:45.354431 containerd[1559]: 2026-03-10 02:07:45.300 [INFO][3841] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:45.354431 containerd[1559]: 2026-03-10 02:07:45.300 [INFO][3841] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" host="localhost" Mar 10 02:07:45.354646 containerd[1559]: 2026-03-10 02:07:45.301 [INFO][3841] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6 Mar 10 02:07:45.354646 containerd[1559]: 2026-03-10 02:07:45.308 [INFO][3841] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" host="localhost" Mar 10 02:07:45.354646 containerd[1559]: 2026-03-10 02:07:45.315 [INFO][3841] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" host="localhost" Mar 10 02:07:45.354646 containerd[1559]: 2026-03-10 02:07:45.315 [INFO][3841] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" host="localhost" Mar 10 02:07:45.354646 containerd[1559]: 2026-03-10 02:07:45.316 [INFO][3841] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:07:45.354646 containerd[1559]: 2026-03-10 02:07:45.316 [INFO][3841] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" HandleID="k8s-pod-network.8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" Workload="localhost-k8s-csi--node--driver--9rxdl-eth0" Mar 10 02:07:45.354749 containerd[1559]: 2026-03-10 02:07:45.320 [INFO][3827] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" Namespace="calico-system" Pod="csi-node-driver-9rxdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rxdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9rxdl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f45461cc-b942-4f36-8ecd-98b10d5a677a", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9rxdl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b152282075", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:45.354816 containerd[1559]: 2026-03-10 02:07:45.321 [INFO][3827] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" Namespace="calico-system" Pod="csi-node-driver-9rxdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rxdl-eth0" Mar 10 02:07:45.354816 containerd[1559]: 2026-03-10 02:07:45.322 [INFO][3827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b152282075 ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" Namespace="calico-system" Pod="csi-node-driver-9rxdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rxdl-eth0" Mar 10 02:07:45.354816 containerd[1559]: 2026-03-10 02:07:45.337 [INFO][3827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" Namespace="calico-system" Pod="csi-node-driver-9rxdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rxdl-eth0" Mar 10 02:07:45.354872 containerd[1559]: 2026-03-10 02:07:45.338 [INFO][3827] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" Namespace="calico-system" Pod="csi-node-driver-9rxdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rxdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9rxdl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f45461cc-b942-4f36-8ecd-98b10d5a677a", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6", Pod:"csi-node-driver-9rxdl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b152282075", MAC:"aa:81:40:1d:1d:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:45.354935 containerd[1559]: 2026-03-10 02:07:45.346 [INFO][3827] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" 
Namespace="calico-system" Pod="csi-node-driver-9rxdl" WorkloadEndpoint="localhost-k8s-csi--node--driver--9rxdl-eth0" Mar 10 02:07:45.381846 systemd[1]: Created slice kubepods-besteffort-pod4e126a8a_4c0d_4634_b6b2_888c620631b3.slice - libcontainer container kubepods-besteffort-pod4e126a8a_4c0d_4634_b6b2_888c620631b3.slice. Mar 10 02:07:45.388298 containerd[1559]: time="2026-03-10T02:07:45.388144290Z" level=info msg="connecting to shim 8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6" address="unix:///run/containerd/s/9c4a565a4c7b019c18d5b8876aa5c0e145c65402c0dd9172e41dd99e996da330" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:45.392247 kubelet[2717]: I0310 02:07:45.392210 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4e126a8a-4c0d-4634-b6b2-888c620631b3-nginx-config\") pod \"whisker-86c85db795-6qz6g\" (UID: \"4e126a8a-4c0d-4634-b6b2-888c620631b3\") " pod="calico-system/whisker-86c85db795-6qz6g" Mar 10 02:07:45.393085 kubelet[2717]: I0310 02:07:45.393011 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7bxj\" (UniqueName: \"kubernetes.io/projected/4e126a8a-4c0d-4634-b6b2-888c620631b3-kube-api-access-k7bxj\") pod \"whisker-86c85db795-6qz6g\" (UID: \"4e126a8a-4c0d-4634-b6b2-888c620631b3\") " pod="calico-system/whisker-86c85db795-6qz6g" Mar 10 02:07:45.393253 kubelet[2717]: I0310 02:07:45.393174 2717 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e126a8a-4c0d-4634-b6b2-888c620631b3-whisker-ca-bundle\") pod \"whisker-86c85db795-6qz6g\" (UID: \"4e126a8a-4c0d-4634-b6b2-888c620631b3\") " pod="calico-system/whisker-86c85db795-6qz6g" Mar 10 02:07:45.393253 kubelet[2717]: I0310 02:07:45.393208 2717 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4e126a8a-4c0d-4634-b6b2-888c620631b3-whisker-backend-key-pair\") pod \"whisker-86c85db795-6qz6g\" (UID: \"4e126a8a-4c0d-4634-b6b2-888c620631b3\") " pod="calico-system/whisker-86c85db795-6qz6g" Mar 10 02:07:45.414229 systemd[1]: Started cri-containerd-8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6.scope - libcontainer container 8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6. Mar 10 02:07:45.426565 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:07:45.445458 containerd[1559]: time="2026-03-10T02:07:45.445430813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9rxdl,Uid:f45461cc-b942-4f36-8ecd-98b10d5a677a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6\"" Mar 10 02:07:45.447480 containerd[1559]: time="2026-03-10T02:07:45.447451124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 10 02:07:45.690691 containerd[1559]: time="2026-03-10T02:07:45.690656381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c85db795-6qz6g,Uid:4e126a8a-4c0d-4634-b6b2-888c620631b3,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:45.788644 systemd-networkd[1459]: cali014294c8b1d: Link UP Mar 10 02:07:45.788873 systemd-networkd[1459]: cali014294c8b1d: Gained carrier Mar 10 02:07:45.802043 containerd[1559]: 2026-03-10 02:07:45.714 [ERROR][3906] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 02:07:45.802043 containerd[1559]: 2026-03-10 02:07:45.725 [INFO][3906] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--86c85db795--6qz6g-eth0 whisker-86c85db795- calico-system 4e126a8a-4c0d-4634-b6b2-888c620631b3 911 0 2026-03-10 02:07:45 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:86c85db795 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-86c85db795-6qz6g eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali014294c8b1d [] [] }} ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Namespace="calico-system" Pod="whisker-86c85db795-6qz6g" WorkloadEndpoint="localhost-k8s-whisker--86c85db795--6qz6g-" Mar 10 02:07:45.802043 containerd[1559]: 2026-03-10 02:07:45.725 [INFO][3906] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Namespace="calico-system" Pod="whisker-86c85db795-6qz6g" WorkloadEndpoint="localhost-k8s-whisker--86c85db795--6qz6g-eth0" Mar 10 02:07:45.802043 containerd[1559]: 2026-03-10 02:07:45.750 [INFO][3920] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" HandleID="k8s-pod-network.15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Workload="localhost-k8s-whisker--86c85db795--6qz6g-eth0" Mar 10 02:07:45.802290 containerd[1559]: 2026-03-10 02:07:45.756 [INFO][3920] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" HandleID="k8s-pod-network.15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Workload="localhost-k8s-whisker--86c85db795--6qz6g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047de50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-86c85db795-6qz6g", "timestamp":"2026-03-10 02:07:45.750744643 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017c580)} Mar 10 02:07:45.802290 containerd[1559]: 2026-03-10 02:07:45.756 [INFO][3920] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:07:45.802290 containerd[1559]: 2026-03-10 02:07:45.756 [INFO][3920] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:07:45.802290 containerd[1559]: 2026-03-10 02:07:45.756 [INFO][3920] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:07:45.802290 containerd[1559]: 2026-03-10 02:07:45.759 [INFO][3920] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" host="localhost" Mar 10 02:07:45.802290 containerd[1559]: 2026-03-10 02:07:45.764 [INFO][3920] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:07:45.802290 containerd[1559]: 2026-03-10 02:07:45.768 [INFO][3920] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:07:45.802290 containerd[1559]: 2026-03-10 02:07:45.770 [INFO][3920] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:45.802290 containerd[1559]: 2026-03-10 02:07:45.772 [INFO][3920] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:45.802290 containerd[1559]: 2026-03-10 02:07:45.772 [INFO][3920] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" host="localhost" Mar 10 02:07:45.802500 containerd[1559]: 2026-03-10 02:07:45.774 [INFO][3920] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8 Mar 10 02:07:45.802500 containerd[1559]: 2026-03-10 02:07:45.779 [INFO][3920] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" host="localhost" Mar 10 02:07:45.802500 containerd[1559]: 2026-03-10 02:07:45.783 [INFO][3920] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" host="localhost" Mar 10 02:07:45.802500 containerd[1559]: 2026-03-10 02:07:45.783 [INFO][3920] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" host="localhost" Mar 10 02:07:45.802500 containerd[1559]: 2026-03-10 02:07:45.783 [INFO][3920] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 02:07:45.802500 containerd[1559]: 2026-03-10 02:07:45.784 [INFO][3920] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" HandleID="k8s-pod-network.15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Workload="localhost-k8s-whisker--86c85db795--6qz6g-eth0" Mar 10 02:07:45.802607 containerd[1559]: 2026-03-10 02:07:45.786 [INFO][3906] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Namespace="calico-system" Pod="whisker-86c85db795-6qz6g" WorkloadEndpoint="localhost-k8s-whisker--86c85db795--6qz6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86c85db795--6qz6g-eth0", GenerateName:"whisker-86c85db795-", Namespace:"calico-system", SelfLink:"", UID:"4e126a8a-4c0d-4634-b6b2-888c620631b3", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86c85db795", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-86c85db795-6qz6g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali014294c8b1d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:45.802607 containerd[1559]: 2026-03-10 02:07:45.786 [INFO][3906] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Namespace="calico-system" Pod="whisker-86c85db795-6qz6g" WorkloadEndpoint="localhost-k8s-whisker--86c85db795--6qz6g-eth0" Mar 10 02:07:45.802730 containerd[1559]: 2026-03-10 02:07:45.786 [INFO][3906] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali014294c8b1d ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Namespace="calico-system" Pod="whisker-86c85db795-6qz6g" WorkloadEndpoint="localhost-k8s-whisker--86c85db795--6qz6g-eth0" Mar 10 02:07:45.802730 containerd[1559]: 2026-03-10 02:07:45.788 [INFO][3906] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Namespace="calico-system" Pod="whisker-86c85db795-6qz6g" WorkloadEndpoint="localhost-k8s-whisker--86c85db795--6qz6g-eth0" Mar 10 02:07:45.802767 containerd[1559]: 2026-03-10 02:07:45.788 [INFO][3906] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Namespace="calico-system" Pod="whisker-86c85db795-6qz6g" WorkloadEndpoint="localhost-k8s-whisker--86c85db795--6qz6g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--86c85db795--6qz6g-eth0", GenerateName:"whisker-86c85db795-", Namespace:"calico-system", SelfLink:"", UID:"4e126a8a-4c0d-4634-b6b2-888c620631b3", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 45, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"86c85db795", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8", Pod:"whisker-86c85db795-6qz6g", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali014294c8b1d", MAC:"56:85:d7:af:52:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:45.802828 containerd[1559]: 2026-03-10 02:07:45.799 [INFO][3906] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" Namespace="calico-system" Pod="whisker-86c85db795-6qz6g" WorkloadEndpoint="localhost-k8s-whisker--86c85db795--6qz6g-eth0" Mar 10 02:07:45.821864 containerd[1559]: time="2026-03-10T02:07:45.821817771Z" level=info msg="connecting to shim 15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8" address="unix:///run/containerd/s/409a60c6ecfe5f06b4f15b4c552c8f155097c107376c4f24ff332e95add20bba" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:45.850732 systemd[1]: Started cri-containerd-15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8.scope - libcontainer container 15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8. 
Mar 10 02:07:45.883185 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:07:45.959825 containerd[1559]: time="2026-03-10T02:07:45.959509197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-86c85db795-6qz6g,Uid:4e126a8a-4c0d-4634-b6b2-888c620631b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8\"" Mar 10 02:07:46.229849 containerd[1559]: time="2026-03-10T02:07:46.229717587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:46.231421 containerd[1559]: time="2026-03-10T02:07:46.230712711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 10 02:07:46.231940 containerd[1559]: time="2026-03-10T02:07:46.231920168Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:46.234307 containerd[1559]: time="2026-03-10T02:07:46.234286505Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:46.234938 containerd[1559]: time="2026-03-10T02:07:46.234888343Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 787.413905ms" Mar 10 02:07:46.234938 containerd[1559]: time="2026-03-10T02:07:46.234931484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" 
returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 10 02:07:46.236433 containerd[1559]: time="2026-03-10T02:07:46.236247855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 10 02:07:46.242389 containerd[1559]: time="2026-03-10T02:07:46.242366291Z" level=info msg="CreateContainer within sandbox \"8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 10 02:07:46.258203 containerd[1559]: time="2026-03-10T02:07:46.257511110Z" level=info msg="Container 31dc7a976efbb607a918fce379f86948be0988f0845f465f80a4dde89b209179: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:46.262336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865023643.mount: Deactivated successfully. Mar 10 02:07:46.268358 containerd[1559]: time="2026-03-10T02:07:46.268303137Z" level=info msg="CreateContainer within sandbox \"8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"31dc7a976efbb607a918fce379f86948be0988f0845f465f80a4dde89b209179\"" Mar 10 02:07:46.269094 containerd[1559]: time="2026-03-10T02:07:46.268876130Z" level=info msg="StartContainer for \"31dc7a976efbb607a918fce379f86948be0988f0845f465f80a4dde89b209179\"" Mar 10 02:07:46.270422 containerd[1559]: time="2026-03-10T02:07:46.270350667Z" level=info msg="connecting to shim 31dc7a976efbb607a918fce379f86948be0988f0845f465f80a4dde89b209179" address="unix:///run/containerd/s/9c4a565a4c7b019c18d5b8876aa5c0e145c65402c0dd9172e41dd99e996da330" protocol=ttrpc version=3 Mar 10 02:07:46.315426 systemd[1]: Started cri-containerd-31dc7a976efbb607a918fce379f86948be0988f0845f465f80a4dde89b209179.scope - libcontainer container 31dc7a976efbb607a918fce379f86948be0988f0845f465f80a4dde89b209179. 
Mar 10 02:07:46.325673 kubelet[2717]: I0310 02:07:46.325579 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 10 02:07:46.453404 containerd[1559]: time="2026-03-10T02:07:46.453363291Z" level=info msg="StartContainer for \"31dc7a976efbb607a918fce379f86948be0988f0845f465f80a4dde89b209179\" returns successfully" Mar 10 02:07:46.741305 systemd-networkd[1459]: vxlan.calico: Link UP Mar 10 02:07:46.741314 systemd-networkd[1459]: vxlan.calico: Gained carrier Mar 10 02:07:46.837354 containerd[1559]: time="2026-03-10T02:07:46.837291945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:46.838136 containerd[1559]: time="2026-03-10T02:07:46.838107664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 10 02:07:46.839338 containerd[1559]: time="2026-03-10T02:07:46.839311767Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:46.841496 containerd[1559]: time="2026-03-10T02:07:46.841455615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:46.842106 containerd[1559]: time="2026-03-10T02:07:46.842011278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 605.737795ms" Mar 10 02:07:46.842155 containerd[1559]: time="2026-03-10T02:07:46.842128016Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 10 02:07:46.844605 containerd[1559]: time="2026-03-10T02:07:46.843611133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 10 02:07:46.848494 containerd[1559]: time="2026-03-10T02:07:46.848412424Z" level=info msg="CreateContainer within sandbox \"15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 10 02:07:46.859179 containerd[1559]: time="2026-03-10T02:07:46.858019809Z" level=info msg="Container aff0db774675cbdf45df440c988f833e680bc117426f6152aed7b39262aa62b5: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:46.867740 containerd[1559]: time="2026-03-10T02:07:46.867676541Z" level=info msg="CreateContainer within sandbox \"15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"aff0db774675cbdf45df440c988f833e680bc117426f6152aed7b39262aa62b5\"" Mar 10 02:07:46.868978 containerd[1559]: time="2026-03-10T02:07:46.868733630Z" level=info msg="StartContainer for \"aff0db774675cbdf45df440c988f833e680bc117426f6152aed7b39262aa62b5\"" Mar 10 02:07:46.871563 containerd[1559]: time="2026-03-10T02:07:46.871482228Z" level=info msg="connecting to shim aff0db774675cbdf45df440c988f833e680bc117426f6152aed7b39262aa62b5" address="unix:///run/containerd/s/409a60c6ecfe5f06b4f15b4c552c8f155097c107376c4f24ff332e95add20bba" protocol=ttrpc version=3 Mar 10 02:07:46.903238 systemd[1]: Started cri-containerd-aff0db774675cbdf45df440c988f833e680bc117426f6152aed7b39262aa62b5.scope - libcontainer container aff0db774675cbdf45df440c988f833e680bc117426f6152aed7b39262aa62b5. 
Mar 10 02:07:46.958090 containerd[1559]: time="2026-03-10T02:07:46.957962637Z" level=info msg="StartContainer for \"aff0db774675cbdf45df440c988f833e680bc117426f6152aed7b39262aa62b5\" returns successfully" Mar 10 02:07:47.077406 systemd-networkd[1459]: cali5b152282075: Gained IPv6LL Mar 10 02:07:47.195928 kubelet[2717]: I0310 02:07:47.195857 2717 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60484490-d829-4e39-bbde-8dc1e78cf934" path="/var/lib/kubelet/pods/60484490-d829-4e39-bbde-8dc1e78cf934/volumes" Mar 10 02:07:47.459249 systemd-networkd[1459]: cali014294c8b1d: Gained IPv6LL Mar 10 02:07:47.608186 containerd[1559]: time="2026-03-10T02:07:47.608001140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:47.608797 containerd[1559]: time="2026-03-10T02:07:47.608692843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 10 02:07:47.609936 containerd[1559]: time="2026-03-10T02:07:47.609893830Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:47.612204 containerd[1559]: time="2026-03-10T02:07:47.612149825Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:47.612738 containerd[1559]: time="2026-03-10T02:07:47.612700958Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 769.061142ms" Mar 10 02:07:47.612738 containerd[1559]: time="2026-03-10T02:07:47.612735302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 10 02:07:47.614012 containerd[1559]: time="2026-03-10T02:07:47.613761204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 10 02:07:47.617245 containerd[1559]: time="2026-03-10T02:07:47.617007513Z" level=info msg="CreateContainer within sandbox \"8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 10 02:07:47.628390 containerd[1559]: time="2026-03-10T02:07:47.628321584Z" level=info msg="Container e4ee89f87a1dcb1e232a8ac845c4f2b4c0b25aa8ddd2753880ecf056809a79b3: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:47.636755 containerd[1559]: time="2026-03-10T02:07:47.636689478Z" level=info msg="CreateContainer within sandbox \"8fa725b23806a884db77b6f31c428940d3401763c781efcf3a627ed6417ba6e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e4ee89f87a1dcb1e232a8ac845c4f2b4c0b25aa8ddd2753880ecf056809a79b3\"" Mar 10 02:07:47.637451 containerd[1559]: time="2026-03-10T02:07:47.637407555Z" level=info msg="StartContainer for \"e4ee89f87a1dcb1e232a8ac845c4f2b4c0b25aa8ddd2753880ecf056809a79b3\"" Mar 10 02:07:47.638696 containerd[1559]: time="2026-03-10T02:07:47.638663704Z" level=info msg="connecting to shim e4ee89f87a1dcb1e232a8ac845c4f2b4c0b25aa8ddd2753880ecf056809a79b3" address="unix:///run/containerd/s/9c4a565a4c7b019c18d5b8876aa5c0e145c65402c0dd9172e41dd99e996da330" protocol=ttrpc version=3 Mar 10 02:07:47.672304 systemd[1]: Started 
cri-containerd-e4ee89f87a1dcb1e232a8ac845c4f2b4c0b25aa8ddd2753880ecf056809a79b3.scope - libcontainer container e4ee89f87a1dcb1e232a8ac845c4f2b4c0b25aa8ddd2753880ecf056809a79b3. Mar 10 02:07:47.764261 containerd[1559]: time="2026-03-10T02:07:47.764117639Z" level=info msg="StartContainer for \"e4ee89f87a1dcb1e232a8ac845c4f2b4c0b25aa8ddd2753880ecf056809a79b3\" returns successfully" Mar 10 02:07:47.907280 systemd-networkd[1459]: vxlan.calico: Gained IPv6LL Mar 10 02:07:48.245100 kubelet[2717]: I0310 02:07:48.244988 2717 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 10 02:07:48.246275 kubelet[2717]: I0310 02:07:48.246224 2717 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 10 02:07:48.354579 kubelet[2717]: I0310 02:07:48.354509 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9rxdl" podStartSLOduration=16.187909061 podStartE2EDuration="18.354495205s" podCreationTimestamp="2026-03-10 02:07:30 +0000 UTC" firstStartedPulling="2026-03-10 02:07:45.447017198 +0000 UTC m=+30.351251208" lastFinishedPulling="2026-03-10 02:07:47.613603342 +0000 UTC m=+32.517837352" observedRunningTime="2026-03-10 02:07:48.351688775 +0000 UTC m=+33.255922775" watchObservedRunningTime="2026-03-10 02:07:48.354495205 +0000 UTC m=+33.258729214" Mar 10 02:07:48.426378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount674994263.mount: Deactivated successfully. 
Mar 10 02:07:48.445333 containerd[1559]: time="2026-03-10T02:07:48.445257913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:48.446252 containerd[1559]: time="2026-03-10T02:07:48.446200281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 10 02:07:48.447335 containerd[1559]: time="2026-03-10T02:07:48.447291487Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:48.449702 containerd[1559]: time="2026-03-10T02:07:48.449673488Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:48.454664 containerd[1559]: time="2026-03-10T02:07:48.454616979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 840.83093ms" Mar 10 02:07:48.454664 containerd[1559]: time="2026-03-10T02:07:48.454652966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 10 02:07:48.459276 containerd[1559]: time="2026-03-10T02:07:48.459231001Z" level=info msg="CreateContainer within sandbox \"15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 10 02:07:48.468345 
containerd[1559]: time="2026-03-10T02:07:48.468304431Z" level=info msg="Container dc38ad6a9ab6ce89934a24fe5f866d81e3c5a948d9c8b78638e2449356214fcf: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:48.477100 containerd[1559]: time="2026-03-10T02:07:48.477014678Z" level=info msg="CreateContainer within sandbox \"15011e29ace175926814134de81bd399f50e5d76dc086a434390b43bbf77aca8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"dc38ad6a9ab6ce89934a24fe5f866d81e3c5a948d9c8b78638e2449356214fcf\"" Mar 10 02:07:48.477682 containerd[1559]: time="2026-03-10T02:07:48.477656055Z" level=info msg="StartContainer for \"dc38ad6a9ab6ce89934a24fe5f866d81e3c5a948d9c8b78638e2449356214fcf\"" Mar 10 02:07:48.478917 containerd[1559]: time="2026-03-10T02:07:48.478868986Z" level=info msg="connecting to shim dc38ad6a9ab6ce89934a24fe5f866d81e3c5a948d9c8b78638e2449356214fcf" address="unix:///run/containerd/s/409a60c6ecfe5f06b4f15b4c552c8f155097c107376c4f24ff332e95add20bba" protocol=ttrpc version=3 Mar 10 02:07:48.503233 systemd[1]: Started cri-containerd-dc38ad6a9ab6ce89934a24fe5f866d81e3c5a948d9c8b78638e2449356214fcf.scope - libcontainer container dc38ad6a9ab6ce89934a24fe5f866d81e3c5a948d9c8b78638e2449356214fcf. 
Mar 10 02:07:48.558781 containerd[1559]: time="2026-03-10T02:07:48.558711441Z" level=info msg="StartContainer for \"dc38ad6a9ab6ce89934a24fe5f866d81e3c5a948d9c8b78638e2449356214fcf\" returns successfully" Mar 10 02:07:55.637666 kubelet[2717]: I0310 02:07:55.637591 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 10 02:07:55.737437 kubelet[2717]: I0310 02:07:55.737303 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-86c85db795-6qz6g" podStartSLOduration=8.244697464 podStartE2EDuration="10.737289911s" podCreationTimestamp="2026-03-10 02:07:45 +0000 UTC" firstStartedPulling="2026-03-10 02:07:45.962965526 +0000 UTC m=+30.867199535" lastFinishedPulling="2026-03-10 02:07:48.455557972 +0000 UTC m=+33.359791982" observedRunningTime="2026-03-10 02:07:49.354589368 +0000 UTC m=+34.258823378" watchObservedRunningTime="2026-03-10 02:07:55.737289911 +0000 UTC m=+40.641523911" Mar 10 02:07:56.231360 containerd[1559]: time="2026-03-10T02:07:56.231252717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6767cf97dd-rcch5,Uid:0277ffd5-4e59-4cdc-87b4-bb70cff55e14,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:56.249664 containerd[1559]: time="2026-03-10T02:07:56.249602745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589bb4f5c4-8wzb9,Uid:c6552e7b-3fb9-4a05-88cd-e37203f5405d,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:56.417732 systemd-networkd[1459]: calife0a246c16c: Link UP Mar 10 02:07:56.419611 systemd-networkd[1459]: calife0a246c16c: Gained carrier Mar 10 02:07:56.444209 containerd[1559]: 2026-03-10 02:07:56.326 [INFO][4437] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0 calico-apiserver-6767cf97dd- calico-system 0277ffd5-4e59-4cdc-87b4-bb70cff55e14 856 0 2026-03-10 02:07:30 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6767cf97dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6767cf97dd-rcch5 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calife0a246c16c [] [] }} ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-rcch5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-" Mar 10 02:07:56.444209 containerd[1559]: 2026-03-10 02:07:56.326 [INFO][4437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-rcch5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0" Mar 10 02:07:56.444209 containerd[1559]: 2026-03-10 02:07:56.363 [INFO][4468] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" HandleID="k8s-pod-network.c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Workload="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0" Mar 10 02:07:56.444400 containerd[1559]: 2026-03-10 02:07:56.371 [INFO][4468] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" HandleID="k8s-pod-network.c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Workload="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef460), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6767cf97dd-rcch5", "timestamp":"2026-03-10 02:07:56.36380462 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002c8f20)} Mar 10 02:07:56.444400 containerd[1559]: 2026-03-10 02:07:56.371 [INFO][4468] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:07:56.444400 containerd[1559]: 2026-03-10 02:07:56.371 [INFO][4468] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:07:56.444400 containerd[1559]: 2026-03-10 02:07:56.371 [INFO][4468] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:07:56.444400 containerd[1559]: 2026-03-10 02:07:56.375 [INFO][4468] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" host="localhost" Mar 10 02:07:56.444400 containerd[1559]: 2026-03-10 02:07:56.384 [INFO][4468] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:07:56.444400 containerd[1559]: 2026-03-10 02:07:56.389 [INFO][4468] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:07:56.444400 containerd[1559]: 2026-03-10 02:07:56.391 [INFO][4468] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:56.444400 containerd[1559]: 2026-03-10 02:07:56.393 [INFO][4468] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:56.444400 containerd[1559]: 2026-03-10 02:07:56.393 [INFO][4468] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" host="localhost" Mar 10 02:07:56.444638 containerd[1559]: 2026-03-10 02:07:56.395 [INFO][4468] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc Mar 10 02:07:56.444638 containerd[1559]: 2026-03-10 02:07:56.399 [INFO][4468] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" host="localhost" Mar 10 02:07:56.444638 containerd[1559]: 2026-03-10 02:07:56.406 [INFO][4468] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" host="localhost" Mar 10 02:07:56.444638 containerd[1559]: 2026-03-10 02:07:56.406 [INFO][4468] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" host="localhost" Mar 10 02:07:56.444638 containerd[1559]: 2026-03-10 02:07:56.406 [INFO][4468] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 02:07:56.444638 containerd[1559]: 2026-03-10 02:07:56.406 [INFO][4468] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" HandleID="k8s-pod-network.c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Workload="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0" Mar 10 02:07:56.444748 containerd[1559]: 2026-03-10 02:07:56.409 [INFO][4437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-rcch5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0", GenerateName:"calico-apiserver-6767cf97dd-", Namespace:"calico-system", SelfLink:"", UID:"0277ffd5-4e59-4cdc-87b4-bb70cff55e14", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6767cf97dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6767cf97dd-rcch5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calife0a246c16c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:56.444818 containerd[1559]: 2026-03-10 02:07:56.409 [INFO][4437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-rcch5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0" Mar 10 02:07:56.444818 containerd[1559]: 2026-03-10 02:07:56.409 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife0a246c16c ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-rcch5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0" Mar 10 02:07:56.444818 containerd[1559]: 2026-03-10 02:07:56.421 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-rcch5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0" Mar 10 02:07:56.444922 containerd[1559]: 2026-03-10 02:07:56.426 [INFO][4437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-rcch5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0", GenerateName:"calico-apiserver-6767cf97dd-", Namespace:"calico-system", 
SelfLink:"", UID:"0277ffd5-4e59-4cdc-87b4-bb70cff55e14", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6767cf97dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc", Pod:"calico-apiserver-6767cf97dd-rcch5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calife0a246c16c", MAC:"de:3d:06:16:76:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:56.444985 containerd[1559]: 2026-03-10 02:07:56.438 [INFO][4437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-rcch5" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--rcch5-eth0" Mar 10 02:07:56.476806 containerd[1559]: time="2026-03-10T02:07:56.476729691Z" level=info msg="connecting to shim c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc" address="unix:///run/containerd/s/c0ebecc548b2fbcb263111ac0697deea8eb1d1e3ebd137ffa8684717605d1fba" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:56.518258 systemd[1]: 
Started cri-containerd-c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc.scope - libcontainer container c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc. Mar 10 02:07:56.526626 systemd-networkd[1459]: calicc0310c306a: Link UP Mar 10 02:07:56.530409 systemd-networkd[1459]: calicc0310c306a: Gained carrier Mar 10 02:07:56.546466 containerd[1559]: 2026-03-10 02:07:56.333 [INFO][4445] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0 calico-kube-controllers-589bb4f5c4- calico-system c6552e7b-3fb9-4a05-88cd-e37203f5405d 853 0 2026-03-10 02:07:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:589bb4f5c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-589bb4f5c4-8wzb9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicc0310c306a [] [] }} ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Namespace="calico-system" Pod="calico-kube-controllers-589bb4f5c4-8wzb9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-" Mar 10 02:07:56.546466 containerd[1559]: 2026-03-10 02:07:56.334 [INFO][4445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Namespace="calico-system" Pod="calico-kube-controllers-589bb4f5c4-8wzb9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0" Mar 10 02:07:56.546466 containerd[1559]: 2026-03-10 02:07:56.372 [INFO][4474] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" 
HandleID="k8s-pod-network.73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Workload="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0" Mar 10 02:07:56.546682 containerd[1559]: 2026-03-10 02:07:56.380 [INFO][4474] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" HandleID="k8s-pod-network.73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Workload="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fac0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-589bb4f5c4-8wzb9", "timestamp":"2026-03-10 02:07:56.372717776 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000591080)} Mar 10 02:07:56.546682 containerd[1559]: 2026-03-10 02:07:56.380 [INFO][4474] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:07:56.546682 containerd[1559]: 2026-03-10 02:07:56.406 [INFO][4474] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 02:07:56.546682 containerd[1559]: 2026-03-10 02:07:56.406 [INFO][4474] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:07:56.546682 containerd[1559]: 2026-03-10 02:07:56.478 [INFO][4474] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" host="localhost" Mar 10 02:07:56.546682 containerd[1559]: 2026-03-10 02:07:56.488 [INFO][4474] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:07:56.546682 containerd[1559]: 2026-03-10 02:07:56.498 [INFO][4474] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:07:56.546682 containerd[1559]: 2026-03-10 02:07:56.500 [INFO][4474] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:56.546682 containerd[1559]: 2026-03-10 02:07:56.502 [INFO][4474] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:56.546880 containerd[1559]: 2026-03-10 02:07:56.502 [INFO][4474] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" host="localhost" Mar 10 02:07:56.546880 containerd[1559]: 2026-03-10 02:07:56.504 [INFO][4474] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159 Mar 10 02:07:56.546880 containerd[1559]: 2026-03-10 02:07:56.510 [INFO][4474] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" host="localhost" Mar 10 02:07:56.546880 containerd[1559]: 2026-03-10 02:07:56.516 [INFO][4474] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" host="localhost" Mar 10 02:07:56.546880 containerd[1559]: 2026-03-10 02:07:56.517 [INFO][4474] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" host="localhost" Mar 10 02:07:56.546880 containerd[1559]: 2026-03-10 02:07:56.517 [INFO][4474] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:07:56.546880 containerd[1559]: 2026-03-10 02:07:56.517 [INFO][4474] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" HandleID="k8s-pod-network.73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Workload="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0" Mar 10 02:07:56.547005 containerd[1559]: 2026-03-10 02:07:56.520 [INFO][4445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Namespace="calico-system" Pod="calico-kube-controllers-589bb4f5c4-8wzb9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0", GenerateName:"calico-kube-controllers-589bb4f5c4-", Namespace:"calico-system", SelfLink:"", UID:"c6552e7b-3fb9-4a05-88cd-e37203f5405d", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589bb4f5c4", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-589bb4f5c4-8wzb9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc0310c306a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:56.547112 containerd[1559]: 2026-03-10 02:07:56.520 [INFO][4445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Namespace="calico-system" Pod="calico-kube-controllers-589bb4f5c4-8wzb9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0" Mar 10 02:07:56.547112 containerd[1559]: 2026-03-10 02:07:56.520 [INFO][4445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc0310c306a ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Namespace="calico-system" Pod="calico-kube-controllers-589bb4f5c4-8wzb9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0" Mar 10 02:07:56.547112 containerd[1559]: 2026-03-10 02:07:56.531 [INFO][4445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Namespace="calico-system" Pod="calico-kube-controllers-589bb4f5c4-8wzb9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0" Mar 10 02:07:56.547185 containerd[1559]: 
2026-03-10 02:07:56.532 [INFO][4445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Namespace="calico-system" Pod="calico-kube-controllers-589bb4f5c4-8wzb9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0", GenerateName:"calico-kube-controllers-589bb4f5c4-", Namespace:"calico-system", SelfLink:"", UID:"c6552e7b-3fb9-4a05-88cd-e37203f5405d", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"589bb4f5c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159", Pod:"calico-kube-controllers-589bb4f5c4-8wzb9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicc0310c306a", MAC:"d6:9b:7d:b8:95:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:56.547246 containerd[1559]: 
2026-03-10 02:07:56.541 [INFO][4445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" Namespace="calico-system" Pod="calico-kube-controllers-589bb4f5c4-8wzb9" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--589bb4f5c4--8wzb9-eth0" Mar 10 02:07:56.551943 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:07:56.579798 containerd[1559]: time="2026-03-10T02:07:56.579701100Z" level=info msg="connecting to shim 73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159" address="unix:///run/containerd/s/4ab10e232b7dfdd7d1eaa9c1bc5947fc9931e9c67d7dda7e0659e24a1dd3927d" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:56.618255 systemd[1]: Started cri-containerd-73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159.scope - libcontainer container 73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159. 
Mar 10 02:07:56.619775 containerd[1559]: time="2026-03-10T02:07:56.619741055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6767cf97dd-rcch5,Uid:0277ffd5-4e59-4cdc-87b4-bb70cff55e14,Namespace:calico-system,Attempt:0,} returns sandbox id \"c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc\"" Mar 10 02:07:56.624103 containerd[1559]: time="2026-03-10T02:07:56.623475229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 10 02:07:56.645267 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:07:56.689026 containerd[1559]: time="2026-03-10T02:07:56.688969933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-589bb4f5c4-8wzb9,Uid:c6552e7b-3fb9-4a05-88cd-e37203f5405d,Namespace:calico-system,Attempt:0,} returns sandbox id \"73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159\"" Mar 10 02:07:57.204850 containerd[1559]: time="2026-03-10T02:07:57.204794437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-c9bgq,Uid:f4410bc0-61db-48a1-8215-d72b639881fc,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:57.355864 systemd-networkd[1459]: cali2c01a54ba32: Link UP Mar 10 02:07:57.358995 systemd-networkd[1459]: cali2c01a54ba32: Gained carrier Mar 10 02:07:57.378331 containerd[1559]: 2026-03-10 02:07:57.250 [INFO][4638] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0 goldmane-cccfbd5cf- calico-system f4410bc0-61db-48a1-8215-d72b639881fc 855 0 2026-03-10 02:07:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-c9bgq eth0 goldmane [] [] 
[kns.calico-system ksa.calico-system.goldmane] cali2c01a54ba32 [] [] }} ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" Namespace="calico-system" Pod="goldmane-cccfbd5cf-c9bgq" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--c9bgq-" Mar 10 02:07:57.378331 containerd[1559]: 2026-03-10 02:07:57.250 [INFO][4638] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" Namespace="calico-system" Pod="goldmane-cccfbd5cf-c9bgq" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0" Mar 10 02:07:57.378331 containerd[1559]: 2026-03-10 02:07:57.286 [INFO][4654] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" HandleID="k8s-pod-network.e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" Workload="localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0" Mar 10 02:07:57.378774 containerd[1559]: 2026-03-10 02:07:57.298 [INFO][4654] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" HandleID="k8s-pod-network.e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" Workload="localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7cd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-c9bgq", "timestamp":"2026-03-10 02:07:57.2866633 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003518c0)} Mar 10 02:07:57.378774 containerd[1559]: 2026-03-10 02:07:57.298 [INFO][4654] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 10 02:07:57.378774 containerd[1559]: 2026-03-10 02:07:57.298 [INFO][4654] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:07:57.378774 containerd[1559]: 2026-03-10 02:07:57.298 [INFO][4654] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:07:57.378774 containerd[1559]: 2026-03-10 02:07:57.301 [INFO][4654] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" host="localhost" Mar 10 02:07:57.378774 containerd[1559]: 2026-03-10 02:07:57.311 [INFO][4654] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:07:57.378774 containerd[1559]: 2026-03-10 02:07:57.317 [INFO][4654] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:07:57.378774 containerd[1559]: 2026-03-10 02:07:57.321 [INFO][4654] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:57.378774 containerd[1559]: 2026-03-10 02:07:57.327 [INFO][4654] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:57.378774 containerd[1559]: 2026-03-10 02:07:57.327 [INFO][4654] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" host="localhost" Mar 10 02:07:57.378999 containerd[1559]: 2026-03-10 02:07:57.329 [INFO][4654] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960 Mar 10 02:07:57.378999 containerd[1559]: 2026-03-10 02:07:57.334 [INFO][4654] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" host="localhost" Mar 10 02:07:57.378999 containerd[1559]: 2026-03-10 02:07:57.341 [INFO][4654] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" host="localhost" Mar 10 02:07:57.378999 containerd[1559]: 2026-03-10 02:07:57.341 [INFO][4654] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" host="localhost" Mar 10 02:07:57.378999 containerd[1559]: 2026-03-10 02:07:57.341 [INFO][4654] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:07:57.378999 containerd[1559]: 2026-03-10 02:07:57.341 [INFO][4654] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" HandleID="k8s-pod-network.e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" Workload="localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0" Mar 10 02:07:57.379189 containerd[1559]: 2026-03-10 02:07:57.350 [INFO][4638] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" Namespace="calico-system" Pod="goldmane-cccfbd5cf-c9bgq" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"f4410bc0-61db-48a1-8215-d72b639881fc", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-c9bgq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2c01a54ba32", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:57.379189 containerd[1559]: 2026-03-10 02:07:57.352 [INFO][4638] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" Namespace="calico-system" Pod="goldmane-cccfbd5cf-c9bgq" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0" Mar 10 02:07:57.379271 containerd[1559]: 2026-03-10 02:07:57.352 [INFO][4638] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c01a54ba32 ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" Namespace="calico-system" Pod="goldmane-cccfbd5cf-c9bgq" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0" Mar 10 02:07:57.379271 containerd[1559]: 2026-03-10 02:07:57.358 [INFO][4638] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" Namespace="calico-system" Pod="goldmane-cccfbd5cf-c9bgq" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0" Mar 10 02:07:57.379313 containerd[1559]: 2026-03-10 02:07:57.358 [INFO][4638] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" 
Namespace="calico-system" Pod="goldmane-cccfbd5cf-c9bgq" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"f4410bc0-61db-48a1-8215-d72b639881fc", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960", Pod:"goldmane-cccfbd5cf-c9bgq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2c01a54ba32", MAC:"1e:00:02:4f:18:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:57.379372 containerd[1559]: 2026-03-10 02:07:57.374 [INFO][4638] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" Namespace="calico-system" Pod="goldmane-cccfbd5cf-c9bgq" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--c9bgq-eth0" Mar 10 02:07:57.416297 containerd[1559]: 
time="2026-03-10T02:07:57.416229967Z" level=info msg="connecting to shim e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960" address="unix:///run/containerd/s/e046a536e0470a53e6c5d4cb7794e2f0af6e116289dae557b8aa16c2ca996114" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:57.465422 systemd[1]: Started cri-containerd-e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960.scope - libcontainer container e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960. Mar 10 02:07:57.497779 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:07:57.555201 containerd[1559]: time="2026-03-10T02:07:57.554985149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-c9bgq,Uid:f4410bc0-61db-48a1-8215-d72b639881fc,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960\"" Mar 10 02:07:58.083310 systemd-networkd[1459]: calife0a246c16c: Gained IPv6LL Mar 10 02:07:58.243698 containerd[1559]: time="2026-03-10T02:07:58.243575372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rqpzt,Uid:c09a44c4-f571-4334-8394-76c9c0007ea5,Namespace:kube-system,Attempt:0,}" Mar 10 02:07:58.246513 containerd[1559]: time="2026-03-10T02:07:58.246464508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dd9lc,Uid:95ea2f0e-a57c-4442-a6b9-ffb3fce130b7,Namespace:kube-system,Attempt:0,}" Mar 10 02:07:58.295754 containerd[1559]: time="2026-03-10T02:07:58.295668432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:58.296805 containerd[1559]: time="2026-03-10T02:07:58.296761820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 10 02:07:58.298231 containerd[1559]: 
time="2026-03-10T02:07:58.298176498Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:58.303781 containerd[1559]: time="2026-03-10T02:07:58.303727806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:07:58.305649 containerd[1559]: time="2026-03-10T02:07:58.305532023Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.681876647s" Mar 10 02:07:58.306370 containerd[1559]: time="2026-03-10T02:07:58.305973095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 10 02:07:58.308108 containerd[1559]: time="2026-03-10T02:07:58.307877145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 10 02:07:58.310818 systemd[1]: Started sshd@7-10.0.0.149:22-10.0.0.1:53108.service - OpenSSH per-connection server daemon (10.0.0.1:53108). 
Mar 10 02:07:58.311797 containerd[1559]: time="2026-03-10T02:07:58.311772545Z" level=info msg="CreateContainer within sandbox \"c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 02:07:58.348222 containerd[1559]: time="2026-03-10T02:07:58.348036412Z" level=info msg="Container 5028ac184925c34f0b7308891ebc3408080998d6eeca4daca8b2d062e168d92e: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:58.373140 containerd[1559]: time="2026-03-10T02:07:58.369964328Z" level=info msg="CreateContainer within sandbox \"c50db611ae1c47b010a1d4572b502e2e2af9a33ef81bd2589018892a822c85dc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5028ac184925c34f0b7308891ebc3408080998d6eeca4daca8b2d062e168d92e\"" Mar 10 02:07:58.373140 containerd[1559]: time="2026-03-10T02:07:58.372882439Z" level=info msg="StartContainer for \"5028ac184925c34f0b7308891ebc3408080998d6eeca4daca8b2d062e168d92e\"" Mar 10 02:07:58.375533 containerd[1559]: time="2026-03-10T02:07:58.375462438Z" level=info msg="connecting to shim 5028ac184925c34f0b7308891ebc3408080998d6eeca4daca8b2d062e168d92e" address="unix:///run/containerd/s/c0ebecc548b2fbcb263111ac0697deea8eb1d1e3ebd137ffa8684717605d1fba" protocol=ttrpc version=3 Mar 10 02:07:58.405573 systemd[1]: Started cri-containerd-5028ac184925c34f0b7308891ebc3408080998d6eeca4daca8b2d062e168d92e.scope - libcontainer container 5028ac184925c34f0b7308891ebc3408080998d6eeca4daca8b2d062e168d92e. Mar 10 02:07:58.427693 sshd[4773]: Accepted publickey for core from 10.0.0.1 port 53108 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 02:07:58.428697 sshd-session[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 02:07:58.442496 systemd-logind[1546]: New session 8 of user core. Mar 10 02:07:58.445393 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 10 02:07:58.500838 containerd[1559]: time="2026-03-10T02:07:58.500747940Z" level=info msg="StartContainer for \"5028ac184925c34f0b7308891ebc3408080998d6eeca4daca8b2d062e168d92e\" returns successfully" Mar 10 02:07:58.531578 systemd-networkd[1459]: calicc0310c306a: Gained IPv6LL Mar 10 02:07:58.547273 systemd-networkd[1459]: cali92d008306ca: Link UP Mar 10 02:07:58.550601 systemd-networkd[1459]: cali92d008306ca: Gained carrier Mar 10 02:07:58.571122 containerd[1559]: 2026-03-10 02:07:58.343 [INFO][4742] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--rqpzt-eth0 coredns-66bc5c9577- kube-system c09a44c4-f571-4334-8394-76c9c0007ea5 848 0 2026-03-10 02:07:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-rqpzt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali92d008306ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Namespace="kube-system" Pod="coredns-66bc5c9577-rqpzt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rqpzt-" Mar 10 02:07:58.571122 containerd[1559]: 2026-03-10 02:07:58.343 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Namespace="kube-system" Pod="coredns-66bc5c9577-rqpzt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rqpzt-eth0" Mar 10 02:07:58.571122 containerd[1559]: 2026-03-10 02:07:58.408 [INFO][4781] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" 
HandleID="k8s-pod-network.927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Workload="localhost-k8s-coredns--66bc5c9577--rqpzt-eth0" Mar 10 02:07:58.571387 containerd[1559]: 2026-03-10 02:07:58.427 [INFO][4781] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" HandleID="k8s-pod-network.927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Workload="localhost-k8s-coredns--66bc5c9577--rqpzt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123ba0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-rqpzt", "timestamp":"2026-03-10 02:07:58.40822164 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001866e0)} Mar 10 02:07:58.571387 containerd[1559]: 2026-03-10 02:07:58.427 [INFO][4781] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:07:58.571387 containerd[1559]: 2026-03-10 02:07:58.427 [INFO][4781] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 02:07:58.571387 containerd[1559]: 2026-03-10 02:07:58.428 [INFO][4781] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:07:58.571387 containerd[1559]: 2026-03-10 02:07:58.436 [INFO][4781] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" host="localhost" Mar 10 02:07:58.571387 containerd[1559]: 2026-03-10 02:07:58.464 [INFO][4781] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:07:58.571387 containerd[1559]: 2026-03-10 02:07:58.496 [INFO][4781] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:07:58.571387 containerd[1559]: 2026-03-10 02:07:58.506 [INFO][4781] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:58.571387 containerd[1559]: 2026-03-10 02:07:58.512 [INFO][4781] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:58.571387 containerd[1559]: 2026-03-10 02:07:58.513 [INFO][4781] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" host="localhost" Mar 10 02:07:58.571618 containerd[1559]: 2026-03-10 02:07:58.517 [INFO][4781] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e Mar 10 02:07:58.571618 containerd[1559]: 2026-03-10 02:07:58.522 [INFO][4781] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" host="localhost" Mar 10 02:07:58.571618 containerd[1559]: 2026-03-10 02:07:58.531 [INFO][4781] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" host="localhost" Mar 10 02:07:58.571618 containerd[1559]: 2026-03-10 02:07:58.531 [INFO][4781] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" host="localhost" Mar 10 02:07:58.571618 containerd[1559]: 2026-03-10 02:07:58.531 [INFO][4781] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:07:58.571618 containerd[1559]: 2026-03-10 02:07:58.531 [INFO][4781] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" HandleID="k8s-pod-network.927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Workload="localhost-k8s-coredns--66bc5c9577--rqpzt-eth0" Mar 10 02:07:58.571730 containerd[1559]: 2026-03-10 02:07:58.539 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Namespace="kube-system" Pod="coredns-66bc5c9577-rqpzt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rqpzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rqpzt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c09a44c4-f571-4334-8394-76c9c0007ea5", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-rqpzt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92d008306ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:58.571730 containerd[1559]: 2026-03-10 02:07:58.540 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Namespace="kube-system" Pod="coredns-66bc5c9577-rqpzt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rqpzt-eth0" Mar 10 02:07:58.571730 containerd[1559]: 2026-03-10 02:07:58.542 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92d008306ca ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Namespace="kube-system" Pod="coredns-66bc5c9577-rqpzt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rqpzt-eth0" Mar 10 
02:07:58.571730 containerd[1559]: 2026-03-10 02:07:58.545 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Namespace="kube-system" Pod="coredns-66bc5c9577-rqpzt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rqpzt-eth0" Mar 10 02:07:58.571730 containerd[1559]: 2026-03-10 02:07:58.550 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Namespace="kube-system" Pod="coredns-66bc5c9577-rqpzt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rqpzt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--rqpzt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"c09a44c4-f571-4334-8394-76c9c0007ea5", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e", Pod:"coredns-66bc5c9577-rqpzt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali92d008306ca", MAC:"46:68:f5:be:7c:d8", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:58.571730 containerd[1559]: 2026-03-10 02:07:58.563 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" Namespace="kube-system" Pod="coredns-66bc5c9577-rqpzt" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--rqpzt-eth0" Mar 10 02:07:58.614736 systemd-networkd[1459]: cali2ed5e8e0f22: Link UP Mar 10 02:07:58.617375 systemd-networkd[1459]: cali2ed5e8e0f22: Gained carrier Mar 10 02:07:58.653018 sshd[4817]: Connection closed by 10.0.0.1 port 53108 Mar 10 02:07:58.654391 sshd-session[4773]: pam_unix(sshd:session): session closed for user core Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.367 [INFO][4744] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--dd9lc-eth0 coredns-66bc5c9577- kube-system 95ea2f0e-a57c-4442-a6b9-ffb3fce130b7 852 0 2026-03-10 02:07:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] 
map[] [] [] []} {k8s localhost coredns-66bc5c9577-dd9lc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ed5e8e0f22 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Namespace="kube-system" Pod="coredns-66bc5c9577-dd9lc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dd9lc-" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.367 [INFO][4744] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Namespace="kube-system" Pod="coredns-66bc5c9577-dd9lc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dd9lc-eth0" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.427 [INFO][4790] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" HandleID="k8s-pod-network.47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Workload="localhost-k8s-coredns--66bc5c9577--dd9lc-eth0" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.442 [INFO][4790] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" HandleID="k8s-pod-network.47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Workload="localhost-k8s-coredns--66bc5c9577--dd9lc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f9890), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-dd9lc", "timestamp":"2026-03-10 02:07:58.427607554 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0xc0001def20)} Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.443 [INFO][4790] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.531 [INFO][4790] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.532 [INFO][4790] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.537 [INFO][4790] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" host="localhost" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.560 [INFO][4790] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.579 [INFO][4790] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.582 [INFO][4790] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.585 [INFO][4790] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.586 [INFO][4790] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" host="localhost" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.588 [INFO][4790] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.593 [INFO][4790] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" host="localhost" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.601 [INFO][4790] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" host="localhost" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.601 [INFO][4790] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" host="localhost" Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.601 [INFO][4790] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:07:58.655096 containerd[1559]: 2026-03-10 02:07:58.601 [INFO][4790] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" HandleID="k8s-pod-network.47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Workload="localhost-k8s-coredns--66bc5c9577--dd9lc-eth0" Mar 10 02:07:58.655556 containerd[1559]: 2026-03-10 02:07:58.604 [INFO][4744] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Namespace="kube-system" Pod="coredns-66bc5c9577-dd9lc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dd9lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dd9lc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"95ea2f0e-a57c-4442-a6b9-ffb3fce130b7", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-dd9lc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ed5e8e0f22", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:58.655556 containerd[1559]: 2026-03-10 02:07:58.604 [INFO][4744] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Namespace="kube-system" Pod="coredns-66bc5c9577-dd9lc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dd9lc-eth0" Mar 10 02:07:58.655556 containerd[1559]: 2026-03-10 02:07:58.604 [INFO][4744] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ed5e8e0f22 ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Namespace="kube-system" Pod="coredns-66bc5c9577-dd9lc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dd9lc-eth0" Mar 10 02:07:58.655556 containerd[1559]: 2026-03-10 02:07:58.620 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Namespace="kube-system" Pod="coredns-66bc5c9577-dd9lc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dd9lc-eth0" Mar 10 02:07:58.655556 containerd[1559]: 2026-03-10 02:07:58.624 [INFO][4744] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Namespace="kube-system" Pod="coredns-66bc5c9577-dd9lc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dd9lc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--dd9lc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"95ea2f0e-a57c-4442-a6b9-ffb3fce130b7", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a", 
Pod:"coredns-66bc5c9577-dd9lc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ed5e8e0f22", MAC:"56:2b:69:c3:e7:4f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:58.655556 containerd[1559]: 2026-03-10 02:07:58.650 [INFO][4744] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" Namespace="kube-system" Pod="coredns-66bc5c9577-dd9lc" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--dd9lc-eth0" Mar 10 02:07:58.660855 containerd[1559]: time="2026-03-10T02:07:58.660237968Z" level=info msg="connecting to shim 927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e" address="unix:///run/containerd/s/fca3306ae62ec84093e07f0a7c630fb1ce32f830bc089fb94c0dc8a6eb6f1f1b" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:58.660526 systemd-logind[1546]: Session 8 logged out. Waiting for processes to exit. 
Mar 10 02:07:58.661378 systemd[1]: sshd@7-10.0.0.149:22-10.0.0.1:53108.service: Deactivated successfully. Mar 10 02:07:58.664909 systemd[1]: session-8.scope: Deactivated successfully. Mar 10 02:07:58.667539 systemd-logind[1546]: Removed session 8. Mar 10 02:07:58.700262 systemd[1]: Started cri-containerd-927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e.scope - libcontainer container 927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e. Mar 10 02:07:58.705810 containerd[1559]: time="2026-03-10T02:07:58.705709123Z" level=info msg="connecting to shim 47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a" address="unix:///run/containerd/s/ac33604b89f90f380705847f42aa09b35b2bc2593d23a76ab5e1e32d1772926c" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:58.727681 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:07:58.748269 systemd[1]: Started cri-containerd-47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a.scope - libcontainer container 47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a. 
Mar 10 02:07:58.769762 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:07:58.775519 containerd[1559]: time="2026-03-10T02:07:58.775466822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rqpzt,Uid:c09a44c4-f571-4334-8394-76c9c0007ea5,Namespace:kube-system,Attempt:0,} returns sandbox id \"927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e\"" Mar 10 02:07:58.784555 containerd[1559]: time="2026-03-10T02:07:58.784497340Z" level=info msg="CreateContainer within sandbox \"927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 02:07:58.807961 containerd[1559]: time="2026-03-10T02:07:58.807906499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dd9lc,Uid:95ea2f0e-a57c-4442-a6b9-ffb3fce130b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a\"" Mar 10 02:07:58.814752 containerd[1559]: time="2026-03-10T02:07:58.814703994Z" level=info msg="CreateContainer within sandbox \"47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 10 02:07:58.820983 containerd[1559]: time="2026-03-10T02:07:58.820925103Z" level=info msg="Container fe7c034f87bd7c7755412eec2489b9d76865acb986458dc9f9188224c68d9d08: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:58.824229 containerd[1559]: time="2026-03-10T02:07:58.823829207Z" level=info msg="Container 19581abf04dc9c2575a0c80236008d185642f4e25e96a73fa10208bc0fa936d5: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:58.831515 containerd[1559]: time="2026-03-10T02:07:58.831420348Z" level=info msg="CreateContainer within sandbox \"927c9ef8b863f25006e1ba8e4b64a00b5b5c4c77350478c57822d59e1c4fa79e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"fe7c034f87bd7c7755412eec2489b9d76865acb986458dc9f9188224c68d9d08\"" Mar 10 02:07:58.832867 containerd[1559]: time="2026-03-10T02:07:58.831993542Z" level=info msg="StartContainer for \"fe7c034f87bd7c7755412eec2489b9d76865acb986458dc9f9188224c68d9d08\"" Mar 10 02:07:58.835201 containerd[1559]: time="2026-03-10T02:07:58.835151509Z" level=info msg="connecting to shim fe7c034f87bd7c7755412eec2489b9d76865acb986458dc9f9188224c68d9d08" address="unix:///run/containerd/s/fca3306ae62ec84093e07f0a7c630fb1ce32f830bc089fb94c0dc8a6eb6f1f1b" protocol=ttrpc version=3 Mar 10 02:07:58.837549 containerd[1559]: time="2026-03-10T02:07:58.837518743Z" level=info msg="CreateContainer within sandbox \"47e1f724a06f46d76be6babcbc9ecbfca38f64279ee70939093e813d9ac8b72a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"19581abf04dc9c2575a0c80236008d185642f4e25e96a73fa10208bc0fa936d5\"" Mar 10 02:07:58.838124 containerd[1559]: time="2026-03-10T02:07:58.838039604Z" level=info msg="StartContainer for \"19581abf04dc9c2575a0c80236008d185642f4e25e96a73fa10208bc0fa936d5\"" Mar 10 02:07:58.839477 containerd[1559]: time="2026-03-10T02:07:58.839446698Z" level=info msg="connecting to shim 19581abf04dc9c2575a0c80236008d185642f4e25e96a73fa10208bc0fa936d5" address="unix:///run/containerd/s/ac33604b89f90f380705847f42aa09b35b2bc2593d23a76ab5e1e32d1772926c" protocol=ttrpc version=3 Mar 10 02:07:58.872228 systemd[1]: Started cri-containerd-19581abf04dc9c2575a0c80236008d185642f4e25e96a73fa10208bc0fa936d5.scope - libcontainer container 19581abf04dc9c2575a0c80236008d185642f4e25e96a73fa10208bc0fa936d5. Mar 10 02:07:58.874191 systemd[1]: Started cri-containerd-fe7c034f87bd7c7755412eec2489b9d76865acb986458dc9f9188224c68d9d08.scope - libcontainer container fe7c034f87bd7c7755412eec2489b9d76865acb986458dc9f9188224c68d9d08. 
Mar 10 02:07:58.916618 systemd-networkd[1459]: cali2c01a54ba32: Gained IPv6LL Mar 10 02:07:58.923678 containerd[1559]: time="2026-03-10T02:07:58.923624293Z" level=info msg="StartContainer for \"19581abf04dc9c2575a0c80236008d185642f4e25e96a73fa10208bc0fa936d5\" returns successfully" Mar 10 02:07:58.923874 containerd[1559]: time="2026-03-10T02:07:58.923830393Z" level=info msg="StartContainer for \"fe7c034f87bd7c7755412eec2489b9d76865acb986458dc9f9188224c68d9d08\" returns successfully" Mar 10 02:07:59.234569 containerd[1559]: time="2026-03-10T02:07:59.234523241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6767cf97dd-s4z2d,Uid:3804cfdb-f007-453a-9310-c14ad48665c1,Namespace:calico-system,Attempt:0,}" Mar 10 02:07:59.377305 systemd-networkd[1459]: calib63eabff20f: Link UP Mar 10 02:07:59.379263 systemd-networkd[1459]: calib63eabff20f: Gained carrier Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.286 [INFO][5061] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0 calico-apiserver-6767cf97dd- calico-system 3804cfdb-f007-453a-9310-c14ad48665c1 857 0 2026-03-10 02:07:30 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6767cf97dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6767cf97dd-s4z2d eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib63eabff20f [] [] }} ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-s4z2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.287 [INFO][5061] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-s4z2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.326 [INFO][5076] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" HandleID="k8s-pod-network.8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Workload="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.334 [INFO][5076] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" HandleID="k8s-pod-network.8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Workload="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6767cf97dd-s4z2d", "timestamp":"2026-03-10 02:07:59.326130283 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000424000)} Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.335 [INFO][5076] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.335 [INFO][5076] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.335 [INFO][5076] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.337 [INFO][5076] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" host="localhost" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.342 [INFO][5076] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.348 [INFO][5076] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.350 [INFO][5076] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.353 [INFO][5076] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.353 [INFO][5076] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" host="localhost" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.355 [INFO][5076] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957 Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.360 [INFO][5076] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" host="localhost" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.367 [INFO][5076] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" host="localhost" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.367 [INFO][5076] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" host="localhost" Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.367 [INFO][5076] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 02:07:59.398772 containerd[1559]: 2026-03-10 02:07:59.367 [INFO][5076] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" HandleID="k8s-pod-network.8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Workload="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0" Mar 10 02:07:59.400450 containerd[1559]: 2026-03-10 02:07:59.371 [INFO][5061] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-s4z2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0", GenerateName:"calico-apiserver-6767cf97dd-", Namespace:"calico-system", SelfLink:"", UID:"3804cfdb-f007-453a-9310-c14ad48665c1", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6767cf97dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6767cf97dd-s4z2d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib63eabff20f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:59.400450 containerd[1559]: 2026-03-10 02:07:59.371 [INFO][5061] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-s4z2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0" Mar 10 02:07:59.400450 containerd[1559]: 2026-03-10 02:07:59.371 [INFO][5061] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib63eabff20f ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-s4z2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0" Mar 10 02:07:59.400450 containerd[1559]: 2026-03-10 02:07:59.380 [INFO][5061] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-s4z2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0" Mar 10 02:07:59.400450 containerd[1559]: 2026-03-10 02:07:59.381 [INFO][5061] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-s4z2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0", GenerateName:"calico-apiserver-6767cf97dd-", Namespace:"calico-system", SelfLink:"", UID:"3804cfdb-f007-453a-9310-c14ad48665c1", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 2, 7, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6767cf97dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957", Pod:"calico-apiserver-6767cf97dd-s4z2d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib63eabff20f", MAC:"0e:51:d1:44:cf:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 02:07:59.400450 containerd[1559]: 2026-03-10 02:07:59.393 [INFO][5061] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" Namespace="calico-system" Pod="calico-apiserver-6767cf97dd-s4z2d" WorkloadEndpoint="localhost-k8s-calico--apiserver--6767cf97dd--s4z2d-eth0" Mar 10 02:07:59.414016 kubelet[2717]: I0310 02:07:59.413899 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dd9lc" podStartSLOduration=39.413881888 podStartE2EDuration="39.413881888s" podCreationTimestamp="2026-03-10 02:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:07:59.413198419 +0000 UTC m=+44.317432430" watchObservedRunningTime="2026-03-10 02:07:59.413881888 +0000 UTC m=+44.318115909" Mar 10 02:07:59.433261 kubelet[2717]: I0310 02:07:59.429449 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rqpzt" podStartSLOduration=39.429433107 podStartE2EDuration="39.429433107s" podCreationTimestamp="2026-03-10 02:07:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:07:59.428564116 +0000 UTC m=+44.332798125" watchObservedRunningTime="2026-03-10 02:07:59.429433107 +0000 UTC m=+44.333667117" Mar 10 02:07:59.452046 kubelet[2717]: I0310 02:07:59.451940 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6767cf97dd-rcch5" podStartSLOduration=27.767370088 podStartE2EDuration="29.451925206s" podCreationTimestamp="2026-03-10 02:07:30 +0000 UTC" firstStartedPulling="2026-03-10 02:07:56.623081375 +0000 UTC m=+41.527315385" lastFinishedPulling="2026-03-10 02:07:58.307636494 +0000 UTC m=+43.211870503" observedRunningTime="2026-03-10 02:07:59.44636631 +0000 UTC m=+44.350600320" watchObservedRunningTime="2026-03-10 02:07:59.451925206 +0000 UTC m=+44.356159216" Mar 10 02:07:59.453993 
containerd[1559]: time="2026-03-10T02:07:59.452468718Z" level=info msg="connecting to shim 8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957" address="unix:///run/containerd/s/6f9d4e568d0e3153678586af157645e5cfada92138c6cf6803d4fb6303251709" namespace=k8s.io protocol=ttrpc version=3 Mar 10 02:07:59.505604 systemd[1]: Started cri-containerd-8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957.scope - libcontainer container 8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957. Mar 10 02:07:59.529266 systemd-resolved[1460]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 02:07:59.570531 containerd[1559]: time="2026-03-10T02:07:59.570489716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6767cf97dd-s4z2d,Uid:3804cfdb-f007-453a-9310-c14ad48665c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957\"" Mar 10 02:07:59.577331 containerd[1559]: time="2026-03-10T02:07:59.577245488Z" level=info msg="CreateContainer within sandbox \"8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 02:07:59.586312 containerd[1559]: time="2026-03-10T02:07:59.586291316Z" level=info msg="Container a14b5609226f86c00d738bd18cb0c212584f9f8c23cb8c08d0287964e8e281a9: CDI devices from CRI Config.CDIDevices: []" Mar 10 02:07:59.598252 containerd[1559]: time="2026-03-10T02:07:59.598200354Z" level=info msg="CreateContainer within sandbox \"8c7d09e5bd504c90eacd31042289db30eb6facc7ec0b4e2a3e4caa0fd8fd5957\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a14b5609226f86c00d738bd18cb0c212584f9f8c23cb8c08d0287964e8e281a9\"" Mar 10 02:07:59.599605 containerd[1559]: time="2026-03-10T02:07:59.599118656Z" level=info msg="StartContainer for 
\"a14b5609226f86c00d738bd18cb0c212584f9f8c23cb8c08d0287964e8e281a9\"" Mar 10 02:07:59.600652 containerd[1559]: time="2026-03-10T02:07:59.600585532Z" level=info msg="connecting to shim a14b5609226f86c00d738bd18cb0c212584f9f8c23cb8c08d0287964e8e281a9" address="unix:///run/containerd/s/6f9d4e568d0e3153678586af157645e5cfada92138c6cf6803d4fb6303251709" protocol=ttrpc version=3 Mar 10 02:07:59.634298 systemd[1]: Started cri-containerd-a14b5609226f86c00d738bd18cb0c212584f9f8c23cb8c08d0287964e8e281a9.scope - libcontainer container a14b5609226f86c00d738bd18cb0c212584f9f8c23cb8c08d0287964e8e281a9. Mar 10 02:07:59.692641 containerd[1559]: time="2026-03-10T02:07:59.692479006Z" level=info msg="StartContainer for \"a14b5609226f86c00d738bd18cb0c212584f9f8c23cb8c08d0287964e8e281a9\" returns successfully" Mar 10 02:08:00.433333 kubelet[2717]: I0310 02:08:00.433162 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 10 02:08:00.515250 systemd-networkd[1459]: cali2ed5e8e0f22: Gained IPv6LL Mar 10 02:08:00.581544 systemd-networkd[1459]: cali92d008306ca: Gained IPv6LL Mar 10 02:08:00.831693 kubelet[2717]: I0310 02:08:00.831425 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6767cf97dd-s4z2d" podStartSLOduration=30.831412783 podStartE2EDuration="30.831412783s" podCreationTimestamp="2026-03-10 02:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 02:08:00.44766061 +0000 UTC m=+45.351894621" watchObservedRunningTime="2026-03-10 02:08:00.831412783 +0000 UTC m=+45.735646793" Mar 10 02:08:01.132882 containerd[1559]: time="2026-03-10T02:08:01.132765587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:08:01.133918 containerd[1559]: time="2026-03-10T02:08:01.133798693Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 10 02:08:01.135235 containerd[1559]: time="2026-03-10T02:08:01.135198736Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:08:01.138338 containerd[1559]: time="2026-03-10T02:08:01.138247630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 02:08:01.139339 containerd[1559]: time="2026-03-10T02:08:01.139269117Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.831227175s" Mar 10 02:08:01.139339 containerd[1559]: time="2026-03-10T02:08:01.139320774Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 10 02:08:01.140880 containerd[1559]: time="2026-03-10T02:08:01.140751209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 10 02:08:01.153322 containerd[1559]: time="2026-03-10T02:08:01.153292007Z" level=info msg="CreateContainer within sandbox \"73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 10 02:08:01.163490 containerd[1559]: time="2026-03-10T02:08:01.163206176Z" level=info msg="Container f5fb34e2bf8ae82f93334992100ed96e5a851f6e9848e7d749c20081192d4b8e: CDI devices from CRI 
Config.CDIDevices: []"
Mar 10 02:08:01.185797 containerd[1559]: time="2026-03-10T02:08:01.185728205Z" level=info msg="CreateContainer within sandbox \"73d5a584c53f53d50b437cd8995cebaa525f14e2475074da3c59ea5c13649159\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f5fb34e2bf8ae82f93334992100ed96e5a851f6e9848e7d749c20081192d4b8e\""
Mar 10 02:08:01.186407 containerd[1559]: time="2026-03-10T02:08:01.186343453Z" level=info msg="StartContainer for \"f5fb34e2bf8ae82f93334992100ed96e5a851f6e9848e7d749c20081192d4b8e\""
Mar 10 02:08:01.187643 containerd[1559]: time="2026-03-10T02:08:01.187362904Z" level=info msg="connecting to shim f5fb34e2bf8ae82f93334992100ed96e5a851f6e9848e7d749c20081192d4b8e" address="unix:///run/containerd/s/4ab10e232b7dfdd7d1eaa9c1bc5947fc9931e9c67d7dda7e0659e24a1dd3927d" protocol=ttrpc version=3
Mar 10 02:08:01.249395 systemd[1]: Started cri-containerd-f5fb34e2bf8ae82f93334992100ed96e5a851f6e9848e7d749c20081192d4b8e.scope - libcontainer container f5fb34e2bf8ae82f93334992100ed96e5a851f6e9848e7d749c20081192d4b8e.
Mar 10 02:08:01.284286 systemd-networkd[1459]: calib63eabff20f: Gained IPv6LL
Mar 10 02:08:01.407711 containerd[1559]: time="2026-03-10T02:08:01.407636706Z" level=info msg="StartContainer for \"f5fb34e2bf8ae82f93334992100ed96e5a851f6e9848e7d749c20081192d4b8e\" returns successfully"
Mar 10 02:08:01.451409 kubelet[2717]: I0310 02:08:01.450793 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-589bb4f5c4-8wzb9" podStartSLOduration=26.002131415 podStartE2EDuration="30.450779061s" podCreationTimestamp="2026-03-10 02:07:31 +0000 UTC" firstStartedPulling="2026-03-10 02:07:56.691586251 +0000 UTC m=+41.595820260" lastFinishedPulling="2026-03-10 02:08:01.140233896 +0000 UTC m=+46.044467906" observedRunningTime="2026-03-10 02:08:01.449779376 +0000 UTC m=+46.354013385" watchObservedRunningTime="2026-03-10 02:08:01.450779061 +0000 UTC m=+46.355013071"
Mar 10 02:08:02.349664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2881953580.mount: Deactivated successfully.
Mar 10 02:08:02.745487 containerd[1559]: time="2026-03-10T02:08:02.745363169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:08:02.746225 containerd[1559]: time="2026-03-10T02:08:02.746179418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 10 02:08:02.747695 containerd[1559]: time="2026-03-10T02:08:02.747630419Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:08:02.752484 containerd[1559]: time="2026-03-10T02:08:02.752417655Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 02:08:02.753037 containerd[1559]: time="2026-03-10T02:08:02.752981226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.612130502s"
Mar 10 02:08:02.753037 containerd[1559]: time="2026-03-10T02:08:02.753022043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 10 02:08:02.758267 containerd[1559]: time="2026-03-10T02:08:02.758224455Z" level=info msg="CreateContainer within sandbox \"e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 10 02:08:02.765441 containerd[1559]: time="2026-03-10T02:08:02.765375171Z" level=info msg="Container 2f484d0c0a56124a2f3b85c129a05ca3f8674f257a4eb69a62df200049f6cd76: CDI devices from CRI Config.CDIDevices: []"
Mar 10 02:08:02.774366 containerd[1559]: time="2026-03-10T02:08:02.774293955Z" level=info msg="CreateContainer within sandbox \"e7756a57d0cfc5e2461387cf73ad83e403cb896d0acb58572a5e1613b9b15960\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"2f484d0c0a56124a2f3b85c129a05ca3f8674f257a4eb69a62df200049f6cd76\""
Mar 10 02:08:02.774868 containerd[1559]: time="2026-03-10T02:08:02.774844851Z" level=info msg="StartContainer for \"2f484d0c0a56124a2f3b85c129a05ca3f8674f257a4eb69a62df200049f6cd76\""
Mar 10 02:08:02.775915 containerd[1559]: time="2026-03-10T02:08:02.775818047Z" level=info msg="connecting to shim 2f484d0c0a56124a2f3b85c129a05ca3f8674f257a4eb69a62df200049f6cd76" address="unix:///run/containerd/s/e046a536e0470a53e6c5d4cb7794e2f0af6e116289dae557b8aa16c2ca996114" protocol=ttrpc version=3
Mar 10 02:08:02.800229 systemd[1]: Started cri-containerd-2f484d0c0a56124a2f3b85c129a05ca3f8674f257a4eb69a62df200049f6cd76.scope - libcontainer container 2f484d0c0a56124a2f3b85c129a05ca3f8674f257a4eb69a62df200049f6cd76.
Mar 10 02:08:02.863262 containerd[1559]: time="2026-03-10T02:08:02.863162378Z" level=info msg="StartContainer for \"2f484d0c0a56124a2f3b85c129a05ca3f8674f257a4eb69a62df200049f6cd76\" returns successfully"
Mar 10 02:08:03.458156 kubelet[2717]: I0310 02:08:03.458009 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-c9bgq" podStartSLOduration=28.262514254 podStartE2EDuration="33.457992405s" podCreationTimestamp="2026-03-10 02:07:30 +0000 UTC" firstStartedPulling="2026-03-10 02:07:57.558428291 +0000 UTC m=+42.462662301" lastFinishedPulling="2026-03-10 02:08:02.753906432 +0000 UTC m=+47.658140452" observedRunningTime="2026-03-10 02:08:03.457291408 +0000 UTC m=+48.361525418" watchObservedRunningTime="2026-03-10 02:08:03.457992405 +0000 UTC m=+48.362226416"
Mar 10 02:08:03.674422 systemd[1]: Started sshd@8-10.0.0.149:22-10.0.0.1:38232.service - OpenSSH per-connection server daemon (10.0.0.1:38232).
Mar 10 02:08:03.748725 sshd[5336]: Accepted publickey for core from 10.0.0.1 port 38232 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:03.750576 sshd-session[5336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:03.755608 systemd-logind[1546]: New session 9 of user core.
Mar 10 02:08:03.765247 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 10 02:08:03.925701 sshd[5339]: Connection closed by 10.0.0.1 port 38232
Mar 10 02:08:03.926015 sshd-session[5336]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:03.929849 systemd[1]: sshd@8-10.0.0.149:22-10.0.0.1:38232.service: Deactivated successfully.
Mar 10 02:08:03.931674 systemd[1]: session-9.scope: Deactivated successfully.
Mar 10 02:08:03.932593 systemd-logind[1546]: Session 9 logged out. Waiting for processes to exit.
Mar 10 02:08:03.933865 systemd-logind[1546]: Removed session 9.
Mar 10 02:08:08.939363 systemd[1]: Started sshd@9-10.0.0.149:22-10.0.0.1:59138.service - OpenSSH per-connection server daemon (10.0.0.1:59138).
Mar 10 02:08:09.005421 sshd[5472]: Accepted publickey for core from 10.0.0.1 port 59138 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:09.006719 sshd-session[5472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:09.011032 systemd-logind[1546]: New session 10 of user core.
Mar 10 02:08:09.021229 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 10 02:08:09.111350 sshd[5475]: Connection closed by 10.0.0.1 port 59138
Mar 10 02:08:09.111714 sshd-session[5472]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:09.119427 systemd[1]: sshd@9-10.0.0.149:22-10.0.0.1:59138.service: Deactivated successfully.
Mar 10 02:08:09.121255 systemd[1]: session-10.scope: Deactivated successfully.
Mar 10 02:08:09.122322 systemd-logind[1546]: Session 10 logged out. Waiting for processes to exit.
Mar 10 02:08:09.124614 systemd[1]: Started sshd@10-10.0.0.149:22-10.0.0.1:59152.service - OpenSSH per-connection server daemon (10.0.0.1:59152).
Mar 10 02:08:09.125753 systemd-logind[1546]: Removed session 10.
Mar 10 02:08:09.189383 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 59152 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:09.190730 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:09.195698 systemd-logind[1546]: New session 11 of user core.
Mar 10 02:08:09.203315 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 10 02:08:09.303661 sshd[5493]: Connection closed by 10.0.0.1 port 59152
Mar 10 02:08:09.304435 sshd-session[5490]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:09.312018 systemd[1]: sshd@10-10.0.0.149:22-10.0.0.1:59152.service: Deactivated successfully.
Mar 10 02:08:09.314715 systemd[1]: session-11.scope: Deactivated successfully.
Mar 10 02:08:09.315908 systemd-logind[1546]: Session 11 logged out. Waiting for processes to exit.
Mar 10 02:08:09.322556 systemd[1]: Started sshd@11-10.0.0.149:22-10.0.0.1:59158.service - OpenSSH per-connection server daemon (10.0.0.1:59158).
Mar 10 02:08:09.325328 systemd-logind[1546]: Removed session 11.
Mar 10 02:08:09.368804 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 59158 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:09.370157 sshd-session[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:09.374999 systemd-logind[1546]: New session 12 of user core.
Mar 10 02:08:09.387283 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 10 02:08:09.465982 sshd[5507]: Connection closed by 10.0.0.1 port 59158
Mar 10 02:08:09.467289 sshd-session[5504]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:09.471320 systemd[1]: sshd@11-10.0.0.149:22-10.0.0.1:59158.service: Deactivated successfully.
Mar 10 02:08:09.473382 systemd[1]: session-12.scope: Deactivated successfully.
Mar 10 02:08:09.474240 systemd-logind[1546]: Session 12 logged out. Waiting for processes to exit.
Mar 10 02:08:09.475405 systemd-logind[1546]: Removed session 12.
Mar 10 02:08:14.485624 systemd[1]: Started sshd@12-10.0.0.149:22-10.0.0.1:59166.service - OpenSSH per-connection server daemon (10.0.0.1:59166).
Mar 10 02:08:14.567581 sshd[5542]: Accepted publickey for core from 10.0.0.1 port 59166 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:14.569542 sshd-session[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:14.574523 systemd-logind[1546]: New session 13 of user core.
Mar 10 02:08:14.580247 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 10 02:08:14.692536 sshd[5569]: Connection closed by 10.0.0.1 port 59166
Mar 10 02:08:14.693042 sshd-session[5542]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:14.702412 systemd[1]: sshd@12-10.0.0.149:22-10.0.0.1:59166.service: Deactivated successfully.
Mar 10 02:08:14.704039 systemd[1]: session-13.scope: Deactivated successfully.
Mar 10 02:08:14.704987 systemd-logind[1546]: Session 13 logged out. Waiting for processes to exit.
Mar 10 02:08:14.707220 systemd[1]: Started sshd@13-10.0.0.149:22-10.0.0.1:59180.service - OpenSSH per-connection server daemon (10.0.0.1:59180).
Mar 10 02:08:14.708576 systemd-logind[1546]: Removed session 13.
Mar 10 02:08:14.763955 sshd[5583]: Accepted publickey for core from 10.0.0.1 port 59180 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:14.765563 sshd-session[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:14.770685 systemd-logind[1546]: New session 14 of user core.
Mar 10 02:08:14.780258 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 10 02:08:14.982296 sshd[5586]: Connection closed by 10.0.0.1 port 59180
Mar 10 02:08:14.982765 sshd-session[5583]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:14.993512 systemd[1]: sshd@13-10.0.0.149:22-10.0.0.1:59180.service: Deactivated successfully.
Mar 10 02:08:14.995647 systemd[1]: session-14.scope: Deactivated successfully.
Mar 10 02:08:14.996911 systemd-logind[1546]: Session 14 logged out. Waiting for processes to exit.
Mar 10 02:08:15.000037 systemd[1]: Started sshd@14-10.0.0.149:22-10.0.0.1:59194.service - OpenSSH per-connection server daemon (10.0.0.1:59194).
Mar 10 02:08:15.000676 systemd-logind[1546]: Removed session 14.
Mar 10 02:08:15.064925 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 59194 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:15.066706 sshd-session[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:15.071894 systemd-logind[1546]: New session 15 of user core.
Mar 10 02:08:15.077288 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 10 02:08:15.526149 sshd[5601]: Connection closed by 10.0.0.1 port 59194
Mar 10 02:08:15.525101 sshd-session[5598]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:15.537891 systemd[1]: sshd@14-10.0.0.149:22-10.0.0.1:59194.service: Deactivated successfully.
Mar 10 02:08:15.540466 systemd[1]: session-15.scope: Deactivated successfully.
Mar 10 02:08:15.543695 systemd-logind[1546]: Session 15 logged out. Waiting for processes to exit.
Mar 10 02:08:15.548321 systemd[1]: Started sshd@15-10.0.0.149:22-10.0.0.1:59208.service - OpenSSH per-connection server daemon (10.0.0.1:59208).
Mar 10 02:08:15.550833 systemd-logind[1546]: Removed session 15.
Mar 10 02:08:15.624657 sshd[5628]: Accepted publickey for core from 10.0.0.1 port 59208 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:15.626417 sshd-session[5628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:15.631838 systemd-logind[1546]: New session 16 of user core.
Mar 10 02:08:15.641234 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 10 02:08:15.924278 sshd[5631]: Connection closed by 10.0.0.1 port 59208
Mar 10 02:08:15.924639 sshd-session[5628]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:15.936695 systemd[1]: sshd@15-10.0.0.149:22-10.0.0.1:59208.service: Deactivated successfully.
Mar 10 02:08:15.939542 systemd[1]: session-16.scope: Deactivated successfully.
Mar 10 02:08:15.942592 systemd-logind[1546]: Session 16 logged out. Waiting for processes to exit.
Mar 10 02:08:15.945521 systemd[1]: Started sshd@16-10.0.0.149:22-10.0.0.1:59218.service - OpenSSH per-connection server daemon (10.0.0.1:59218).
Mar 10 02:08:15.947424 systemd-logind[1546]: Removed session 16.
Mar 10 02:08:15.999371 sshd[5643]: Accepted publickey for core from 10.0.0.1 port 59218 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:16.000797 sshd-session[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:16.006199 systemd-logind[1546]: New session 17 of user core.
Mar 10 02:08:16.015244 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 10 02:08:16.100002 sshd[5646]: Connection closed by 10.0.0.1 port 59218
Mar 10 02:08:16.100524 sshd-session[5643]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:16.105324 systemd[1]: sshd@16-10.0.0.149:22-10.0.0.1:59218.service: Deactivated successfully.
Mar 10 02:08:16.107935 systemd[1]: session-17.scope: Deactivated successfully.
Mar 10 02:08:16.108965 systemd-logind[1546]: Session 17 logged out. Waiting for processes to exit.
Mar 10 02:08:16.111204 systemd-logind[1546]: Removed session 17.
Mar 10 02:08:17.449950 kubelet[2717]: I0310 02:08:17.449893 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 10 02:08:21.112765 systemd[1]: Started sshd@17-10.0.0.149:22-10.0.0.1:42554.service - OpenSSH per-connection server daemon (10.0.0.1:42554).
Mar 10 02:08:21.168556 sshd[5696]: Accepted publickey for core from 10.0.0.1 port 42554 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:21.169893 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:21.174601 systemd-logind[1546]: New session 18 of user core.
Mar 10 02:08:21.184260 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 10 02:08:21.261863 sshd[5699]: Connection closed by 10.0.0.1 port 42554
Mar 10 02:08:21.262241 sshd-session[5696]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:21.266427 systemd[1]: sshd@17-10.0.0.149:22-10.0.0.1:42554.service: Deactivated successfully.
Mar 10 02:08:21.268653 systemd[1]: session-18.scope: Deactivated successfully.
Mar 10 02:08:21.269716 systemd-logind[1546]: Session 18 logged out. Waiting for processes to exit.
Mar 10 02:08:21.271202 systemd-logind[1546]: Removed session 18.
Mar 10 02:08:26.281464 systemd[1]: Started sshd@18-10.0.0.149:22-10.0.0.1:42556.service - OpenSSH per-connection server daemon (10.0.0.1:42556).
Mar 10 02:08:26.376096 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 42556 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY
Mar 10 02:08:26.377993 sshd-session[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 10 02:08:26.383378 systemd-logind[1546]: New session 19 of user core.
Mar 10 02:08:26.393286 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 10 02:08:26.529483 sshd[5745]: Connection closed by 10.0.0.1 port 42556
Mar 10 02:08:26.529385 sshd-session[5742]: pam_unix(sshd:session): session closed for user core
Mar 10 02:08:26.535236 systemd[1]: sshd@18-10.0.0.149:22-10.0.0.1:42556.service: Deactivated successfully.
Mar 10 02:08:26.538765 systemd[1]: session-19.scope: Deactivated successfully.
Mar 10 02:08:26.541853 systemd-logind[1546]: Session 19 logged out. Waiting for processes to exit.
Mar 10 02:08:26.544306 systemd-logind[1546]: Removed session 19.