Mar 13 00:39:02.099982 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026
Mar 13 00:39:02.100002 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:39:02.100014 kernel: BIOS-provided physical RAM map:
Mar 13 00:39:02.100020 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 13 00:39:02.100026 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 13 00:39:02.100032 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 13 00:39:02.100039 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 13 00:39:02.100045 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 13 00:39:02.100076 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 13 00:39:02.100082 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 13 00:39:02.100088 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 13 00:39:02.100098 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 13 00:39:02.100107 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 13 00:39:02.100118 kernel: NX (Execute Disable) protection: active
Mar 13 00:39:02.100130 kernel: APIC: Static calls initialized
Mar 13 00:39:02.100141 kernel: SMBIOS 2.8 present.
Mar 13 00:39:02.100190 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 13 00:39:02.100199 kernel: DMI: Memory slots populated: 1/1
Mar 13 00:39:02.100205 kernel: Hypervisor detected: KVM
Mar 13 00:39:02.100211 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 13 00:39:02.100218 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 13 00:39:02.100224 kernel: kvm-clock: using sched offset of 9773872825 cycles
Mar 13 00:39:02.100231 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 13 00:39:02.100238 kernel: tsc: Detected 2445.426 MHz processor
Mar 13 00:39:02.100244 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 13 00:39:02.100251 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 13 00:39:02.100262 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 13 00:39:02.100268 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 13 00:39:02.100275 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 13 00:39:02.100281 kernel: Using GB pages for direct mapping
Mar 13 00:39:02.100288 kernel: ACPI: Early table checksum verification disabled
Mar 13 00:39:02.100294 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 13 00:39:02.100301 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:02.100308 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:02.100314 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:02.100323 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 13 00:39:02.100330 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:02.100337 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:02.100343 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:02.100350 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 13 00:39:02.100360 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 13 00:39:02.100369 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 13 00:39:02.100376 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 13 00:39:02.100383 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 13 00:39:02.100393 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 13 00:39:02.100465 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 13 00:39:02.100482 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 13 00:39:02.100494 kernel: No NUMA configuration found
Mar 13 00:39:02.100506 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 13 00:39:02.100523 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 13 00:39:02.100530 kernel: Zone ranges:
Mar 13 00:39:02.100537 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 13 00:39:02.100544 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 13 00:39:02.100551 kernel: Normal empty
Mar 13 00:39:02.100557 kernel: Device empty
Mar 13 00:39:02.100564 kernel: Movable zone start for each node
Mar 13 00:39:02.100571 kernel: Early memory node ranges
Mar 13 00:39:02.100577 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 13 00:39:02.100584 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 13 00:39:02.100593 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 13 00:39:02.100600 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 13 00:39:02.100607 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 13 00:39:02.100637 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 13 00:39:02.100644 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 13 00:39:02.100651 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 13 00:39:02.100658 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 13 00:39:02.100665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 13 00:39:02.100701 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 13 00:39:02.100720 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 13 00:39:02.100732 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 13 00:39:02.100743 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 13 00:39:02.100755 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 13 00:39:02.100767 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 13 00:39:02.100777 kernel: TSC deadline timer available
Mar 13 00:39:02.100784 kernel: CPU topo: Max. logical packages: 1
Mar 13 00:39:02.100791 kernel: CPU topo: Max. logical dies: 1
Mar 13 00:39:02.100797 kernel: CPU topo: Max. dies per package: 1
Mar 13 00:39:02.100808 kernel: CPU topo: Max. threads per core: 1
Mar 13 00:39:02.100815 kernel: CPU topo: Num. cores per package: 4
Mar 13 00:39:02.100821 kernel: CPU topo: Num. threads per package: 4
Mar 13 00:39:02.100828 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 13 00:39:02.100835 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 13 00:39:02.100841 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 13 00:39:02.100879 kernel: kvm-guest: setup PV sched yield
Mar 13 00:39:02.100886 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 13 00:39:02.100893 kernel: Booting paravirtualized kernel on KVM
Mar 13 00:39:02.100901 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 13 00:39:02.100911 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 13 00:39:02.100918 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 13 00:39:02.100924 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 13 00:39:02.100931 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 13 00:39:02.100938 kernel: kvm-guest: PV spinlocks enabled
Mar 13 00:39:02.100944 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 13 00:39:02.100952 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:39:02.100959 kernel: random: crng init done
Mar 13 00:39:02.100968 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 13 00:39:02.100975 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 13 00:39:02.100982 kernel: Fallback order for Node 0: 0
Mar 13 00:39:02.100989 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 13 00:39:02.100995 kernel: Policy zone: DMA32
Mar 13 00:39:02.101002 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 13 00:39:02.101009 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 13 00:39:02.101016 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 13 00:39:02.101023 kernel: ftrace: allocated 157 pages with 5 groups
Mar 13 00:39:02.101032 kernel: Dynamic Preempt: voluntary
Mar 13 00:39:02.101038 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 13 00:39:02.101050 kernel: rcu: RCU event tracing is enabled.
Mar 13 00:39:02.101057 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 13 00:39:02.101064 kernel: Trampoline variant of Tasks RCU enabled.
Mar 13 00:39:02.101090 kernel: Rude variant of Tasks RCU enabled.
Mar 13 00:39:02.101097 kernel: Tracing variant of Tasks RCU enabled.
Mar 13 00:39:02.101104 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 13 00:39:02.101110 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 13 00:39:02.101120 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:39:02.101127 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:39:02.101134 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 13 00:39:02.101146 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 13 00:39:02.101158 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 13 00:39:02.101183 kernel: Console: colour VGA+ 80x25
Mar 13 00:39:02.101199 kernel: printk: legacy console [ttyS0] enabled
Mar 13 00:39:02.101213 kernel: ACPI: Core revision 20240827
Mar 13 00:39:02.101225 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 13 00:39:02.101237 kernel: APIC: Switch to symmetric I/O mode setup
Mar 13 00:39:02.101249 kernel: x2apic enabled
Mar 13 00:39:02.101257 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 13 00:39:02.101292 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 13 00:39:02.101300 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 13 00:39:02.101307 kernel: kvm-guest: setup PV IPIs
Mar 13 00:39:02.101314 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 13 00:39:02.101321 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 13 00:39:02.101332 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 13 00:39:02.101339 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 13 00:39:02.101346 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 13 00:39:02.101353 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 13 00:39:02.101360 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 13 00:39:02.101367 kernel: Spectre V2 : Mitigation: Retpolines
Mar 13 00:39:02.101374 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 13 00:39:02.101381 kernel: Speculative Store Bypass: Vulnerable
Mar 13 00:39:02.101388 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 13 00:39:02.101398 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 13 00:39:02.101477 kernel: active return thunk: srso_alias_return_thunk
Mar 13 00:39:02.101485 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 13 00:39:02.101492 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 13 00:39:02.101500 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 13 00:39:02.101507 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 13 00:39:02.101514 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 13 00:39:02.101521 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 13 00:39:02.101532 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 13 00:39:02.101539 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 13 00:39:02.101546 kernel: Freeing SMP alternatives memory: 32K
Mar 13 00:39:02.101553 kernel: pid_max: default: 32768 minimum: 301
Mar 13 00:39:02.101560 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 13 00:39:02.101567 kernel: landlock: Up and running.
Mar 13 00:39:02.101574 kernel: SELinux: Initializing.
Mar 13 00:39:02.101581 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:39:02.101588 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 13 00:39:02.101616 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 13 00:39:02.101623 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 13 00:39:02.101630 kernel: signal: max sigframe size: 1776
Mar 13 00:39:02.101637 kernel: rcu: Hierarchical SRCU implementation.
Mar 13 00:39:02.101645 kernel: rcu: Max phase no-delay instances is 400.
Mar 13 00:39:02.101652 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 13 00:39:02.101659 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 13 00:39:02.101666 kernel: smp: Bringing up secondary CPUs ...
Mar 13 00:39:02.101673 kernel: smpboot: x86: Booting SMP configuration:
Mar 13 00:39:02.101683 kernel: .... node #0, CPUs: #1 #2 #3
Mar 13 00:39:02.101695 kernel: smp: Brought up 1 node, 4 CPUs
Mar 13 00:39:02.101708 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 13 00:39:02.101721 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved)
Mar 13 00:39:02.101733 kernel: devtmpfs: initialized
Mar 13 00:39:02.101746 kernel: x86/mm: Memory block size: 128MB
Mar 13 00:39:02.101754 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 13 00:39:02.101762 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 13 00:39:02.101772 kernel: pinctrl core: initialized pinctrl subsystem
Mar 13 00:39:02.101790 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 13 00:39:02.101802 kernel: audit: initializing netlink subsys (disabled)
Mar 13 00:39:02.101815 kernel: audit: type=2000 audit(1773362337.465:1): state=initialized audit_enabled=0 res=1
Mar 13 00:39:02.101822 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 13 00:39:02.101829 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 13 00:39:02.101836 kernel: cpuidle: using governor menu
Mar 13 00:39:02.101875 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 13 00:39:02.101884 kernel: dca service started, version 1.12.1
Mar 13 00:39:02.101892 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 13 00:39:02.101903 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 13 00:39:02.101910 kernel: PCI: Using configuration type 1 for base access
Mar 13 00:39:02.101917 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 13 00:39:02.101924 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 13 00:39:02.101931 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 13 00:39:02.101938 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 13 00:39:02.101946 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 13 00:39:02.101960 kernel: ACPI: Added _OSI(Module Device)
Mar 13 00:39:02.101972 kernel: ACPI: Added _OSI(Processor Device)
Mar 13 00:39:02.101989 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 13 00:39:02.102002 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 13 00:39:02.102014 kernel: ACPI: Interpreter enabled
Mar 13 00:39:02.102026 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 13 00:39:02.102033 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 13 00:39:02.102042 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 13 00:39:02.102055 kernel: PCI: Using E820 reservations for host bridge windows
Mar 13 00:39:02.102067 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 13 00:39:02.102080 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 13 00:39:02.102310 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 13 00:39:02.102614 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 13 00:39:02.102785 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 13 00:39:02.102797 kernel: PCI host bridge to bus 0000:00
Mar 13 00:39:02.103026 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 13 00:39:02.103164 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 13 00:39:02.103340 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 13 00:39:02.103573 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 13 00:39:02.103747 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 13 00:39:02.103922 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 13 00:39:02.104058 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 13 00:39:02.104261 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 13 00:39:02.104538 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 13 00:39:02.104695 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 13 00:39:02.104834 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 13 00:39:02.105056 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 13 00:39:02.105200 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 13 00:39:02.105390 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 13 00:39:02.105595 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 13 00:39:02.105789 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 13 00:39:02.105973 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 13 00:39:02.106163 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 13 00:39:02.106316 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 13 00:39:02.106552 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 13 00:39:02.106740 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 13 00:39:02.106979 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 13 00:39:02.107168 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 13 00:39:02.107344 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 13 00:39:02.107575 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 13 00:39:02.107796 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 13 00:39:02.108030 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 13 00:39:02.108210 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 13 00:39:02.108458 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 13 00:39:02.108631 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 13 00:39:02.108839 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 13 00:39:02.109099 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 13 00:39:02.109269 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 13 00:39:02.109281 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 13 00:39:02.109289 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 13 00:39:02.109296 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 13 00:39:02.109308 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 13 00:39:02.109315 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 13 00:39:02.109322 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 13 00:39:02.109329 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 13 00:39:02.109336 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 13 00:39:02.109343 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 13 00:39:02.109350 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 13 00:39:02.109357 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 13 00:39:02.109364 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 13 00:39:02.109374 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 13 00:39:02.109381 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 13 00:39:02.109388 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 13 00:39:02.109395 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 13 00:39:02.109402 kernel: iommu: Default domain type: Translated
Mar 13 00:39:02.109463 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 13 00:39:02.109470 kernel: PCI: Using ACPI for IRQ routing
Mar 13 00:39:02.109477 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 13 00:39:02.109484 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 13 00:39:02.109495 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 13 00:39:02.109670 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 13 00:39:02.109837 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 13 00:39:02.110061 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 13 00:39:02.110075 kernel: vgaarb: loaded
Mar 13 00:39:02.110082 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 13 00:39:02.110090 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 13 00:39:02.110097 kernel: clocksource: Switched to clocksource kvm-clock
Mar 13 00:39:02.110109 kernel: VFS: Disk quotas dquot_6.6.0
Mar 13 00:39:02.110116 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 13 00:39:02.110123 kernel: pnp: PnP ACPI init
Mar 13 00:39:02.110279 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 13 00:39:02.110289 kernel: pnp: PnP ACPI: found 6 devices
Mar 13 00:39:02.110297 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 13 00:39:02.110304 kernel: NET: Registered PF_INET protocol family
Mar 13 00:39:02.110311 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 13 00:39:02.110318 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 13 00:39:02.110329 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 13 00:39:02.110336 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 13 00:39:02.110343 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 13 00:39:02.110350 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 13 00:39:02.110357 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:39:02.110364 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 13 00:39:02.110371 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 13 00:39:02.110378 kernel: NET: Registered PF_XDP protocol family
Mar 13 00:39:02.110576 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 13 00:39:02.110766 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 13 00:39:02.111044 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 13 00:39:02.111180 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 13 00:39:02.111349 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 13 00:39:02.111538 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 13 00:39:02.111550 kernel: PCI: CLS 0 bytes, default 64
Mar 13 00:39:02.111558 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 13 00:39:02.111565 kernel: Initialise system trusted keyrings
Mar 13 00:39:02.111578 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 13 00:39:02.111585 kernel: Key type asymmetric registered
Mar 13 00:39:02.111592 kernel: Asymmetric key parser 'x509' registered
Mar 13 00:39:02.111599 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 13 00:39:02.111606 kernel: io scheduler mq-deadline registered
Mar 13 00:39:02.111613 kernel: io scheduler kyber registered
Mar 13 00:39:02.111621 kernel: io scheduler bfq registered
Mar 13 00:39:02.111628 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 13 00:39:02.111635 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 13 00:39:02.111646 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 13 00:39:02.111655 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 13 00:39:02.111668 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 13 00:39:02.111681 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 13 00:39:02.111694 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 13 00:39:02.111706 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 13 00:39:02.111718 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 13 00:39:02.111932 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 13 00:39:02.111951 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 13 00:39:02.112092 kernel: rtc_cmos 00:04: registered as rtc0
Mar 13 00:39:02.112278 kernel: rtc_cmos 00:04: setting system clock to 2026-03-13T00:39:01 UTC (1773362341)
Mar 13 00:39:02.112525 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 13 00:39:02.112540 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 13 00:39:02.112549 kernel: NET: Registered PF_INET6 protocol family
Mar 13 00:39:02.112562 kernel: Segment Routing with IPv6
Mar 13 00:39:02.112574 kernel: In-situ OAM (IOAM) with IPv6
Mar 13 00:39:02.112586 kernel: NET: Registered PF_PACKET protocol family
Mar 13 00:39:02.112606 kernel: Key type dns_resolver registered
Mar 13 00:39:02.112619 kernel: IPI shorthand broadcast: enabled
Mar 13 00:39:02.112632 kernel: sched_clock: Marking stable (3607019931, 778760489)->(4629583586, -243803166)
Mar 13 00:39:02.112644 kernel: registered taskstats version 1
Mar 13 00:39:02.112655 kernel: Loading compiled-in X.509 certificates
Mar 13 00:39:02.112662 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8'
Mar 13 00:39:02.112669 kernel: Demotion targets for Node 0: null
Mar 13 00:39:02.112676 kernel: Key type .fscrypt registered
Mar 13 00:39:02.112683 kernel: Key type fscrypt-provisioning registered
Mar 13 00:39:02.112694 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 13 00:39:02.112701 kernel: ima: Allocated hash algorithm: sha1
Mar 13 00:39:02.112708 kernel: ima: No architecture policies found
Mar 13 00:39:02.112715 kernel: clk: Disabling unused clocks
Mar 13 00:39:02.112722 kernel: Warning: unable to open an initial console.
Mar 13 00:39:02.112729 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 13 00:39:02.112736 kernel: Write protecting the kernel read-only data: 40960k
Mar 13 00:39:02.112743 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 13 00:39:02.112753 kernel: Run /init as init process
Mar 13 00:39:02.112760 kernel: with arguments:
Mar 13 00:39:02.112767 kernel: /init
Mar 13 00:39:02.112774 kernel: with environment:
Mar 13 00:39:02.112781 kernel: HOME=/
Mar 13 00:39:02.112788 kernel: TERM=linux
Mar 13 00:39:02.112796 systemd[1]: Successfully made /usr/ read-only.
Mar 13 00:39:02.112806 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 13 00:39:02.112817 systemd[1]: Detected virtualization kvm.
Mar 13 00:39:02.112824 systemd[1]: Detected architecture x86-64.
Mar 13 00:39:02.112832 systemd[1]: Running in initrd.
Mar 13 00:39:02.112839 systemd[1]: No hostname configured, using default hostname.
Mar 13 00:39:02.112881 systemd[1]: Hostname set to .
Mar 13 00:39:02.112889 systemd[1]: Initializing machine ID from VM UUID.
Mar 13 00:39:02.112896 systemd[1]: Queued start job for default target initrd.target.
Mar 13 00:39:02.112904 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:39:02.112926 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 13 00:39:02.112938 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 13 00:39:02.112953 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 13 00:39:02.112966 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 13 00:39:02.112981 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 13 00:39:02.113003 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 13 00:39:02.113018 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 13 00:39:02.113031 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:39:02.113045 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:39:02.113055 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:39:02.113063 systemd[1]: Reached target slices.target - Slice Units.
Mar 13 00:39:02.113070 systemd[1]: Reached target swap.target - Swaps.
Mar 13 00:39:02.113078 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:39:02.113089 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 13 00:39:02.113097 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 13 00:39:02.113105 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 13 00:39:02.113112 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 13 00:39:02.113120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 13 00:39:02.113127 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 13 00:39:02.113135 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 13 00:39:02.113142 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:39:02.113150 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 13 00:39:02.113160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 13 00:39:02.113168 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 13 00:39:02.113176 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Mar 13 00:39:02.113183 systemd[1]: Starting systemd-fsck-usr.service...
Mar 13 00:39:02.113191 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 13 00:39:02.113204 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:39:02.113218 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:39:02.113231 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:39:02.113256 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:39:02.113274 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:39:02.113316 systemd-journald[204]: Collecting audit messages is disabled.
Mar 13 00:39:02.113334 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:39:02.113342 systemd-journald[204]: Journal started
Mar 13 00:39:02.113361 systemd-journald[204]: Runtime Journal (/run/log/journal/12c699256157400e9725b7b97354b908) is 6M, max 48.3M, 42.2M free.
Mar 13 00:39:02.119666 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:39:02.119696 kernel: Bridge firewalling registered
Mar 13 00:39:02.088161 systemd-modules-load[205]: Inserted module 'overlay'
Mar 13 00:39:02.255918 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:39:02.255937 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:39:02.118554 systemd-modules-load[205]: Inserted module 'br_netfilter'
Mar 13 00:39:02.259145 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:39:02.265192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:39:02.279042 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:39:02.286261 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:39:02.287158 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:39:02.305327 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:39:02.320827 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:39:02.321901 systemd-tmpfiles[225]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:39:02.327228 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:39:02.331110 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:39:02.335154 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:39:02.347290 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:39:02.366282 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:39:02.383538 dracut-cmdline[241]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:39:02.425668 systemd-resolved[245]: Positive Trust Anchors:
Mar 13 00:39:02.425704 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:39:02.425730 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:39:02.428267 systemd-resolved[245]: Defaulting to hostname 'linux'.
Mar 13 00:39:02.429985 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:39:02.432918 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:39:02.552472 kernel: SCSI subsystem initialized
Mar 13 00:39:02.561483 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:39:02.573487 kernel: iscsi: registered transport (tcp)
Mar 13 00:39:02.595905 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:39:02.595992 kernel: QLogic iSCSI HBA Driver
Mar 13 00:39:02.623534 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:39:02.649884 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:39:02.655288 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:39:02.719901 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:39:02.724643 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:39:02.793498 kernel: raid6: avx2x4 gen() 28204 MB/s
Mar 13 00:39:02.811494 kernel: raid6: avx2x2 gen() 28176 MB/s
Mar 13 00:39:02.831071 kernel: raid6: avx2x1 gen() 19872 MB/s
Mar 13 00:39:02.831105 kernel: raid6: using algorithm avx2x4 gen() 28204 MB/s
Mar 13 00:39:02.850712 kernel: raid6: .... xor() 4310 MB/s, rmw enabled
Mar 13 00:39:02.850740 kernel: raid6: using avx2x2 recovery algorithm
Mar 13 00:39:02.873497 kernel: xor: automatically using best checksumming function avx
Mar 13 00:39:03.041502 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:39:03.052283 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:39:03.059094 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:39:03.097593 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Mar 13 00:39:03.104627 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:39:03.113243 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:39:03.148480 dracut-pre-trigger[467]: rd.md=0: removing MD RAID activation
Mar 13 00:39:03.189901 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:39:03.197936 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:39:03.313619 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:39:03.325328 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:39:03.385737 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:39:03.392664 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Mar 13 00:39:03.392697 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 13 00:39:03.402470 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 13 00:39:03.430375 kernel: libata version 3.00 loaded.
Mar 13 00:39:03.430451 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:39:03.430467 kernel: GPT:9289727 != 19775487
Mar 13 00:39:03.430478 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:39:03.430494 kernel: GPT:9289727 != 19775487
Mar 13 00:39:03.430504 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:39:03.430515 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:39:03.418130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:39:03.418246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:39:03.438658 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:39:03.447019 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:39:03.444007 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:39:03.455522 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:39:03.468459 kernel: ahci 0000:00:1f.2: version 3.0
Mar 13 00:39:03.472514 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 13 00:39:03.490699 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Mar 13 00:39:03.490959 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Mar 13 00:39:03.491151 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 13 00:39:03.501474 kernel: scsi host0: ahci
Mar 13 00:39:03.502464 kernel: scsi host1: ahci
Mar 13 00:39:03.502662 kernel: scsi host2: ahci
Mar 13 00:39:03.503490 kernel: scsi host3: ahci
Mar 13 00:39:03.505479 kernel: scsi host4: ahci
Mar 13 00:39:03.508303 kernel: scsi host5: ahci
Mar 13 00:39:03.508576 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Mar 13 00:39:03.508590 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Mar 13 00:39:03.508600 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Mar 13 00:39:03.508668 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Mar 13 00:39:03.508680 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Mar 13 00:39:03.508690 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Mar 13 00:39:03.519643 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 13 00:39:03.659300 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 13 00:39:03.663649 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 13 00:39:03.667521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:39:03.683173 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 13 00:39:03.696187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 13 00:39:03.700838 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:39:03.724521 disk-uuid[626]: Primary Header is updated.
Mar 13 00:39:03.724521 disk-uuid[626]: Secondary Entries is updated.
Mar 13 00:39:03.724521 disk-uuid[626]: Secondary Header is updated.
Mar 13 00:39:03.733490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:39:03.736467 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:39:03.817537 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:03.821472 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:03.821497 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:03.823465 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 13 00:39:03.826961 kernel: ata3.00: LPM support broken, forcing max_power
Mar 13 00:39:03.826981 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 13 00:39:03.830281 kernel: ata3.00: applying bridge limits
Mar 13 00:39:03.833475 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:03.833494 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 13 00:39:03.835451 kernel: ata3.00: LPM support broken, forcing max_power
Mar 13 00:39:03.838694 kernel: ata3.00: configured for UDMA/100
Mar 13 00:39:03.842506 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 13 00:39:03.911116 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 13 00:39:03.911572 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 13 00:39:03.934503 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 13 00:39:04.370377 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:39:04.374358 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:39:04.380610 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:39:04.387558 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:39:04.395026 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:39:04.435939 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:39:04.737536 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 13 00:39:04.737924 disk-uuid[627]: The operation has completed successfully.
Mar 13 00:39:04.772227 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:39:04.772394 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:39:04.811996 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:39:04.839841 sh[655]: Success
Mar 13 00:39:04.864811 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:39:04.864849 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:39:04.868104 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:39:04.881509 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Mar 13 00:39:04.919980 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:39:04.925993 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:39:04.943172 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:39:04.954467 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (667)
Mar 13 00:39:04.961332 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:39:04.961356 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:39:04.973794 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:39:04.973824 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:39:04.975683 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:39:04.976252 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:39:04.981061 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:39:04.982067 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:39:05.008333 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:39:05.037556 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (690)
Mar 13 00:39:05.043715 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:39:05.043739 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:39:05.053376 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:39:05.053400 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:39:05.063480 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:39:05.064237 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:39:05.066366 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:39:05.161026 ignition[741]: Ignition 2.22.0
Mar 13 00:39:05.161062 ignition[741]: Stage: fetch-offline
Mar 13 00:39:05.161103 ignition[741]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:39:05.161114 ignition[741]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:39:05.161196 ignition[741]: parsed url from cmdline: ""
Mar 13 00:39:05.161200 ignition[741]: no config URL provided
Mar 13 00:39:05.161206 ignition[741]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:39:05.161216 ignition[741]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:39:05.161238 ignition[741]: op(1): [started] loading QEMU firmware config module
Mar 13 00:39:05.161244 ignition[741]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 13 00:39:05.172069 ignition[741]: op(1): [finished] loading QEMU firmware config module
Mar 13 00:39:05.188401 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:39:05.195183 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:39:05.241568 systemd-networkd[845]: lo: Link UP
Mar 13 00:39:05.241605 systemd-networkd[845]: lo: Gained carrier
Mar 13 00:39:05.243757 systemd-networkd[845]: Enumeration completed
Mar 13 00:39:05.244513 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:39:05.246624 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:39:05.246629 systemd-networkd[845]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:39:05.247203 systemd-networkd[845]: eth0: Link UP
Mar 13 00:39:05.250853 systemd-networkd[845]: eth0: Gained carrier
Mar 13 00:39:05.250865 systemd-networkd[845]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:39:05.251605 systemd[1]: Reached target network.target - Network.
Mar 13 00:39:05.285484 systemd-networkd[845]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 13 00:39:05.402363 ignition[741]: parsing config with SHA512: e0b8a7e313b3d8d083c4584146baa850a7e22d7fa8095c6eaa36937f276630cad7ea63e2c22c327cfc85dc49a8657b928b9817d5c2dfabaeb82acf9ea8129cee
Mar 13 00:39:05.409622 unknown[741]: fetched base config from "system"
Mar 13 00:39:05.409648 unknown[741]: fetched user config from "qemu"
Mar 13 00:39:05.410044 ignition[741]: fetch-offline: fetch-offline passed
Mar 13 00:39:05.412847 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:39:05.410101 ignition[741]: Ignition finished successfully
Mar 13 00:39:05.421077 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 13 00:39:05.422531 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:39:05.465681 ignition[850]: Ignition 2.22.0
Mar 13 00:39:05.465721 ignition[850]: Stage: kargs
Mar 13 00:39:05.465848 ignition[850]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:39:05.465860 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:39:05.466521 ignition[850]: kargs: kargs passed
Mar 13 00:39:05.466583 ignition[850]: Ignition finished successfully
Mar 13 00:39:05.476868 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:39:05.484527 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:39:05.527949 ignition[858]: Ignition 2.22.0
Mar 13 00:39:05.527978 ignition[858]: Stage: disks
Mar 13 00:39:05.528110 ignition[858]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:39:05.528121 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:39:05.528953 ignition[858]: disks: disks passed
Mar 13 00:39:05.534504 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:39:05.528996 ignition[858]: Ignition finished successfully
Mar 13 00:39:05.537571 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:39:05.541478 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:39:05.547119 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:39:05.552601 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:39:05.558203 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:39:05.565468 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:39:05.611701 systemd-fsck[868]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Mar 13 00:39:05.618200 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:39:05.628563 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:39:05.766529 kernel: EXT4-fs (vda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:39:05.767794 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:39:05.773110 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:39:05.780633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:39:05.784687 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:39:05.789344 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:39:05.789388 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:39:05.789463 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:39:05.823923 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:39:05.840634 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (876)
Mar 13 00:39:05.840664 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:39:05.840681 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:39:05.840937 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:39:05.852205 kernel: BTRFS info (device vda6): turning on async discard
Mar 13 00:39:05.852240 kernel: BTRFS info (device vda6): enabling free space tree
Mar 13 00:39:05.854764 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:39:05.897686 initrd-setup-root[901]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:39:05.905255 initrd-setup-root[908]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:39:05.912318 initrd-setup-root[915]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:39:05.920350 initrd-setup-root[922]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:39:06.041339 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:39:06.046191 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:39:06.059836 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:39:06.073214 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:39:06.080271 kernel: BTRFS info (device vda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:39:06.102074 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:39:06.120436 ignition[991]: INFO : Ignition 2.22.0
Mar 13 00:39:06.120436 ignition[991]: INFO : Stage: mount
Mar 13 00:39:06.125517 ignition[991]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:39:06.125517 ignition[991]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 13 00:39:06.125517 ignition[991]: INFO : mount: mount passed
Mar 13 00:39:06.125517 ignition[991]: INFO : Ignition finished successfully
Mar 13 00:39:06.139108 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:39:06.146373 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:39:06.183069 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:39:06.227698 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1003) Mar 13 00:39:06.227729 kernel: BTRFS info (device vda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52 Mar 13 00:39:06.227741 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 13 00:39:06.236990 kernel: BTRFS info (device vda6): turning on async discard Mar 13 00:39:06.237015 kernel: BTRFS info (device vda6): enabling free space tree Mar 13 00:39:06.238998 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 13 00:39:06.292163 ignition[1020]: INFO : Ignition 2.22.0 Mar 13 00:39:06.292163 ignition[1020]: INFO : Stage: files Mar 13 00:39:06.292163 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:39:06.292163 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:39:06.303134 ignition[1020]: DEBUG : files: compiled without relabeling support, skipping Mar 13 00:39:06.307019 ignition[1020]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 13 00:39:06.307019 ignition[1020]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 13 00:39:06.311721 systemd-networkd[845]: eth0: Gained IPv6LL Mar 13 00:39:06.318267 ignition[1020]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 13 00:39:06.323792 ignition[1020]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 13 00:39:06.323792 ignition[1020]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 13 00:39:06.323792 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 13 00:39:06.323792 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 13 00:39:06.319238 unknown[1020]: wrote ssh authorized keys file for user: core Mar 13 00:39:06.365859 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 13 00:39:06.494489 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 13 00:39:06.494489 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 13 00:39:06.507005 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Mar 13 00:39:06.780816 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 13 00:39:07.334239 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Mar 13 00:39:07.334239 ignition[1020]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 13 00:39:07.346325 ignition[1020]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:39:07.346325 ignition[1020]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 13 00:39:07.346325 ignition[1020]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 13 00:39:07.346325 ignition[1020]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 13 00:39:07.346325 ignition[1020]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 13 00:39:07.346325 ignition[1020]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 13 00:39:07.346325 ignition[1020]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 13 00:39:07.346325 ignition[1020]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 13 00:39:07.395283 ignition[1020]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 13 00:39:07.395283 ignition[1020]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 13 00:39:07.395283 ignition[1020]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 13 00:39:07.395283 ignition[1020]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 13 00:39:07.395283 ignition[1020]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 13 00:39:07.395283 ignition[1020]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:39:07.395283 
ignition[1020]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 13 00:39:07.395283 ignition[1020]: INFO : files: files passed Mar 13 00:39:07.395283 ignition[1020]: INFO : Ignition finished successfully Mar 13 00:39:07.381082 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 13 00:39:07.389175 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 13 00:39:07.396139 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 13 00:39:07.417236 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 13 00:39:07.463985 initrd-setup-root-after-ignition[1048]: grep: /sysroot/oem/oem-release: No such file or directory Mar 13 00:39:07.417343 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 13 00:39:07.472162 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:39:07.472162 initrd-setup-root-after-ignition[1050]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:39:07.427668 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:39:07.486780 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 13 00:39:07.432230 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 13 00:39:07.441301 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 13 00:39:07.502774 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 13 00:39:07.503006 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 13 00:39:07.506199 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 13 00:39:07.513641 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 13 00:39:07.520016 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 13 00:39:07.531164 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 13 00:39:07.579841 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:39:07.587039 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 13 00:39:07.621743 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:39:07.625295 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:39:07.628657 systemd[1]: Stopped target timers.target - Timer Units. Mar 13 00:39:07.635008 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 13 00:39:07.635147 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 13 00:39:07.649177 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 13 00:39:07.652262 systemd[1]: Stopped target basic.target - Basic System. Mar 13 00:39:07.652475 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 13 00:39:07.657731 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 13 00:39:07.663578 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 13 00:39:07.669578 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
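
For orientation, here is a sketch of an Ignition v3 config that would drive the files-stage operations logged above. It is illustrative only: the real config also carried the SSH key, the install script, the YAML manifests, and the coreos-metadata preset, and its exact field values are not recoverable from the log:

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw",
            }],
        },
        "systemd": {
            "units": [{
                "name": "prepare-helm.service",
                "enabled": True,  # matches "setting preset to enabled" above
                "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n",
            }],
        },
    }
    print(json.dumps(config, indent=2))
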
Mar 13 00:39:07.682059 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 13 00:39:07.690724 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 13 00:39:07.694454 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 13 00:39:07.700943 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 13 00:39:07.701104 systemd[1]: Stopped target swap.target - Swaps. Mar 13 00:39:07.711576 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 13 00:39:07.711700 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 13 00:39:07.719285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:39:07.722184 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:39:07.727985 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 13 00:39:07.728183 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:39:07.734469 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 13 00:39:07.734605 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 13 00:39:07.750181 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 13 00:39:07.750321 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:39:07.753369 systemd[1]: Stopped target paths.target - Path Units. Mar 13 00:39:07.759186 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 13 00:39:07.762621 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:39:07.764661 systemd[1]: Stopped target slices.target - Slice Units. Mar 13 00:39:07.770630 systemd[1]: Stopped target sockets.target - Socket Units. Mar 13 00:39:07.776157 systemd[1]: iscsid.socket: Deactivated successfully. Mar 13 00:39:07.776262 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:39:07.782799 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 13 00:39:07.782882 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:39:07.787802 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 13 00:39:07.787962 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:39:07.793019 systemd[1]: ignition-files.service: Deactivated successfully. Mar 13 00:39:07.793168 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 13 00:39:07.806592 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 13 00:39:07.809978 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 13 00:39:07.810113 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:39:07.817563 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 13 00:39:07.824735 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 13 00:39:07.824868 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:39:07.826334 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 13 00:39:07.826490 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:39:07.859534 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 13 00:39:07.863584 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Mar 13 00:39:07.863744 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 13 00:39:07.869988 ignition[1075]: INFO : Ignition 2.22.0 Mar 13 00:39:07.869988 ignition[1075]: INFO : Stage: umount Mar 13 00:39:07.869988 ignition[1075]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:39:07.869988 ignition[1075]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 13 00:39:07.869988 ignition[1075]: INFO : umount: umount passed Mar 13 00:39:07.869988 ignition[1075]: INFO : Ignition finished successfully Mar 13 00:39:07.874639 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 13 00:39:07.874840 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 13 00:39:07.881846 systemd[1]: Stopped target network.target - Network. Mar 13 00:39:07.884595 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 13 00:39:07.884674 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 13 00:39:07.898399 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 13 00:39:07.898517 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 13 00:39:07.901175 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 13 00:39:07.901242 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 13 00:39:07.906704 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 13 00:39:07.906761 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 13 00:39:07.912190 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 13 00:39:07.918119 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 13 00:39:07.935795 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 13 00:39:07.935998 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 13 00:39:07.946116 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 13 00:39:07.946205 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 13 00:39:07.948532 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 13 00:39:07.948693 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 13 00:39:07.962235 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 13 00:39:07.962625 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 13 00:39:07.962677 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:39:07.972662 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:39:07.974252 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 13 00:39:07.974481 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 13 00:39:07.984781 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 13 00:39:07.985031 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 13 00:39:07.992868 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 13 00:39:07.992962 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:39:08.003623 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 13 00:39:08.006018 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 13 00:39:08.006096 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 13 00:39:08.012475 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:39:08.012530 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:39:08.026603 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 13 00:39:08.026655 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 13 00:39:08.029622 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:39:08.035855 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:39:08.060232 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 13 00:39:08.060560 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:39:08.062373 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 13 00:39:08.062561 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 13 00:39:08.072292 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 13 00:39:08.072335 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:39:08.075816 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 13 00:39:08.075877 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:39:08.090827 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 13 00:39:08.090883 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 13 00:39:08.099585 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 00:39:08.099641 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:39:08.114587 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 13 00:39:08.117327 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 13 00:39:08.117386 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:39:08.136287 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 13 00:39:08.136344 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:39:08.146953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:39:08.147022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:39:08.160779 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 13 00:39:08.161016 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 13 00:39:08.164032 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 13 00:39:08.164196 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 13 00:39:08.179062 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 13 00:39:08.183047 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 13 00:39:08.209328 systemd[1]: Switching root. Mar 13 00:39:08.262384 systemd-journald[204]: Journal stopped Mar 13 00:39:09.870577 systemd-journald[204]: Received SIGTERM from PID 1 (systemd). 
Mar 13 00:39:09.870649 kernel: SELinux: policy capability network_peer_controls=1 Mar 13 00:39:09.870669 kernel: SELinux: policy capability open_perms=1 Mar 13 00:39:09.870684 kernel: SELinux: policy capability extended_socket_class=1 Mar 13 00:39:09.870697 kernel: SELinux: policy capability always_check_network=0 Mar 13 00:39:09.870709 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 13 00:39:09.870720 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 13 00:39:09.870736 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 13 00:39:09.870747 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 13 00:39:09.870759 kernel: SELinux: policy capability userspace_initial_context=0 Mar 13 00:39:09.870770 kernel: audit: type=1403 audit(1773362348.500:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 13 00:39:09.870785 systemd[1]: Successfully loaded SELinux policy in 83.110ms. Mar 13 00:39:09.870805 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.319ms. Mar 13 00:39:09.870817 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:39:09.870829 systemd[1]: Detected virtualization kvm. Mar 13 00:39:09.870841 systemd[1]: Detected architecture x86-64. Mar 13 00:39:09.870853 systemd[1]: Detected first boot. Mar 13 00:39:09.870865 systemd[1]: Initializing machine ID from VM UUID. Mar 13 00:39:09.870877 zram_generator::config[1124]: No configuration found. Mar 13 00:39:09.870890 kernel: Guest personality initialized and is inactive Mar 13 00:39:09.870903 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Mar 13 00:39:09.870954 kernel: Initialized host personality Mar 13 00:39:09.870967 kernel: NET: Registered PF_VSOCK protocol family Mar 13 00:39:09.870985 systemd[1]: Populated /etc with preset unit settings. Mar 13 00:39:09.870998 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 13 00:39:09.871011 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 13 00:39:09.871023 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 13 00:39:09.871035 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 13 00:39:09.871046 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 13 00:39:09.871061 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 13 00:39:09.871073 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 13 00:39:09.871085 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 13 00:39:09.871098 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 13 00:39:09.871110 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 13 00:39:09.871121 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 13 00:39:09.871133 systemd[1]: Created slice user.slice - User and Session Slice. Mar 13 00:39:09.871145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
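
"Initializing machine ID from VM UUID" above means systemd seeded the machine ID from the hypervisor-provided UUID on this first boot rather than generating a random one. A sketch of reading the DMI value it is believed to consult on a KVM guest (requires root; the sysfs path is standard):

    # The SMBIOS product UUID exposed by the hypervisor.
    with open("/sys/class/dmi/id/product_uuid") as f:
        print(f.read().strip().lower())
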
Mar 13 00:39:09.871160 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:39:09.871176 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 13 00:39:09.871188 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 13 00:39:09.871200 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 13 00:39:09.871213 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:39:09.871224 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 13 00:39:09.871236 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:39:09.871248 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:39:09.871262 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 13 00:39:09.871275 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 13 00:39:09.871286 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 13 00:39:09.871299 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 13 00:39:09.871310 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:39:09.871322 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 00:39:09.871333 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:39:09.871345 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:39:09.871356 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 13 00:39:09.871371 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 13 00:39:09.871382 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 13 00:39:09.871394 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:39:09.871471 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:39:09.871487 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:39:09.871499 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 13 00:39:09.871510 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 13 00:39:09.871522 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 13 00:39:09.871533 systemd[1]: Mounting media.mount - External Media Directory... Mar 13 00:39:09.871549 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:39:09.871560 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 13 00:39:09.871572 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 13 00:39:09.871583 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 13 00:39:09.871595 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 13 00:39:09.871608 systemd[1]: Reached target machines.target - Containers. Mar 13 00:39:09.871619 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Mar 13 00:39:09.871631 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:39:09.871646 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:39:09.871658 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 13 00:39:09.871670 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:39:09.871681 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:39:09.871692 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:39:09.871704 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 13 00:39:09.871715 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:39:09.871727 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 13 00:39:09.871738 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 13 00:39:09.871753 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 13 00:39:09.871764 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 13 00:39:09.871776 systemd[1]: Stopped systemd-fsck-usr.service. Mar 13 00:39:09.871787 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:39:09.871799 kernel: fuse: init (API version 7.41) Mar 13 00:39:09.871810 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 00:39:09.871822 kernel: loop: module loaded Mar 13 00:39:09.871833 kernel: ACPI: bus type drm_connector registered Mar 13 00:39:09.871847 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 13 00:39:09.871860 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 13 00:39:09.871871 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 13 00:39:09.871883 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 13 00:39:09.871963 systemd-journald[1209]: Collecting audit messages is disabled. Mar 13 00:39:09.871994 systemd-journald[1209]: Journal started Mar 13 00:39:09.872015 systemd-journald[1209]: Runtime Journal (/run/log/journal/12c699256157400e9725b7b97354b908) is 6M, max 48.3M, 42.2M free. Mar 13 00:39:09.303544 systemd[1]: Queued start job for default target multi-user.target. Mar 13 00:39:09.325264 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 13 00:39:09.326181 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 13 00:39:09.879511 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 00:39:09.889079 systemd[1]: verity-setup.service: Deactivated successfully. Mar 13 00:39:09.889177 systemd[1]: Stopped verity-setup.service. Mar 13 00:39:09.900533 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:39:09.909501 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 00:39:09.914974 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Mar 13 00:39:09.918476 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 13 00:39:09.923205 systemd[1]: Mounted media.mount - External Media Directory. Mar 13 00:39:09.927490 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 13 00:39:09.931908 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 13 00:39:09.936119 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 13 00:39:09.940784 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 13 00:39:09.947340 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:39:09.952583 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 13 00:39:09.953052 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 13 00:39:09.958303 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:39:09.958816 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:39:09.964153 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 00:39:09.964644 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 00:39:09.969497 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:39:09.969961 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:39:09.974640 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 13 00:39:09.975117 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 13 00:39:09.979734 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:39:09.980065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:39:09.984016 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 00:39:09.988213 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:39:09.992555 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 13 00:39:09.997327 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 13 00:39:10.014701 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 00:39:10.019749 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 13 00:39:10.024332 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 13 00:39:10.027628 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 13 00:39:10.027661 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:39:10.032159 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 13 00:39:10.041777 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 13 00:39:10.046186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:39:10.048265 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 13 00:39:10.053492 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 13 00:39:10.057275 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
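
The modprobe@configfs, dm_mod, drm, efi_pstore, fuse, and loop services finished above are all instances of systemd's modprobe@.service template, whose payload is roughly a oneshot `modprobe -abq <instance>`. A sketch of that per-instance call:

    import subprocess

    # Roughly what each modprobe@<name>.service instance runs; -abq resolves
    # aliases, honours blacklists, and stays quiet if the module is missing.
    def load_module(name: str) -> None:
        subprocess.run(["modprobe", "-abq", name], check=False)

    for mod in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
        load_module(mod)
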
Mar 13 00:39:10.060126 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 13 00:39:10.064540 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 00:39:10.065806 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:39:10.073658 systemd-journald[1209]: Time spent on flushing to /var/log/journal/12c699256157400e9725b7b97354b908 is 26.605ms for 969 entries. Mar 13 00:39:10.073658 systemd-journald[1209]: System Journal (/var/log/journal/12c699256157400e9725b7b97354b908) is 8M, max 195.6M, 187.6M free. Mar 13 00:39:10.120821 systemd-journald[1209]: Received client request to flush runtime journal. Mar 13 00:39:10.120965 kernel: loop0: detected capacity change from 0 to 110984 Mar 13 00:39:10.074568 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 13 00:39:10.098763 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 13 00:39:10.105980 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:39:10.110234 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 13 00:39:10.114813 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 13 00:39:10.122799 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 13 00:39:10.130829 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 13 00:39:10.136493 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 13 00:39:10.146233 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 13 00:39:10.154629 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 13 00:39:10.171028 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 13 00:39:10.176991 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 13 00:39:10.185532 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 00:39:10.194568 kernel: loop1: detected capacity change from 0 to 228704 Mar 13 00:39:10.196591 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 13 00:39:10.200657 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 13 00:39:10.225148 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 13 00:39:10.225166 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Mar 13 00:39:10.231060 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:39:10.239465 kernel: loop2: detected capacity change from 0 to 128560 Mar 13 00:39:10.273465 kernel: loop3: detected capacity change from 0 to 110984 Mar 13 00:39:10.289475 kernel: loop4: detected capacity change from 0 to 228704 Mar 13 00:39:10.302469 kernel: loop5: detected capacity change from 0 to 128560 Mar 13 00:39:10.315289 (sd-merge)[1268]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 13 00:39:10.316475 (sd-merge)[1268]: Merged extensions into '/usr'. Mar 13 00:39:10.321571 systemd[1]: Reload requested from client PID 1243 ('systemd-sysext') (unit systemd-sysext.service)... Mar 13 00:39:10.321607 systemd[1]: Reloading... Mar 13 00:39:10.388477 zram_generator::config[1291]: No configuration found. 
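
The (sd-merge) lines above show systemd-sysext attaching the containerd-flatcar, docker-flatcar, and kubernetes extension images (the loop0-loop5 capacity changes appear to be their backing devices) and overlaying them onto /usr. A sketch of the discovery step, assuming the standard search directories; /etc/extensions is where Ignition created the kubernetes.raw link earlier:

    import os

    SEARCH = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def discover_images():
        found = []
        for d in SEARCH:
            if os.path.isdir(d):
                found += sorted(os.path.join(d, n) for n in os.listdir(d)
                                if n.endswith(".raw"))
        return found
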
Mar 13 00:39:10.453247 ldconfig[1238]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 13 00:39:10.598039 systemd[1]: Reloading finished in 275 ms. Mar 13 00:39:10.635754 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 13 00:39:10.639547 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 13 00:39:10.644653 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 13 00:39:10.679133 systemd[1]: Starting ensure-sysext.service... Mar 13 00:39:10.683879 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 00:39:10.706614 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:39:10.721465 systemd[1]: Reload requested from client PID 1332 ('systemctl') (unit ensure-sysext.service)... Mar 13 00:39:10.721495 systemd[1]: Reloading... Mar 13 00:39:10.726566 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 13 00:39:10.726633 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 13 00:39:10.727031 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 13 00:39:10.727346 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 13 00:39:10.728568 systemd-tmpfiles[1333]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 13 00:39:10.728852 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Mar 13 00:39:10.728989 systemd-tmpfiles[1333]: ACLs are not supported, ignoring. Mar 13 00:39:10.734743 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:39:10.734773 systemd-tmpfiles[1333]: Skipping /boot Mar 13 00:39:10.744661 systemd-udevd[1334]: Using default interface naming scheme 'v255'. Mar 13 00:39:10.748870 systemd-tmpfiles[1333]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:39:10.748904 systemd-tmpfiles[1333]: Skipping /boot Mar 13 00:39:10.787491 zram_generator::config[1361]: No configuration found. Mar 13 00:39:10.949491 kernel: mousedev: PS/2 mouse device common for all mice Mar 13 00:39:10.976796 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 13 00:39:10.986488 kernel: ACPI: button: Power Button [PWRF] Mar 13 00:39:11.003469 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 13 00:39:11.007490 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 13 00:39:11.008305 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 13 00:39:11.012854 systemd[1]: Reloading finished in 290 ms. Mar 13 00:39:11.026342 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:39:11.030750 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:39:11.095910 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 13 00:39:11.115126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:39:11.118636 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
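
The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") reflect its first-wins rule: once a path has a configuration line, later lines for the same path from other tmpfiles.d fragments are dropped with a warning. A sketch of that rule over (source, line) pairs:

    # tmpfiles.d line format: Type Path Mode User Group Age Argument;
    # the path is the second whitespace-separated field.
    def dedupe(entries):
        seen, kept = set(), []
        for source, line in entries:
            path = line.split()[1]
            if path in seen:
                print(f'{source}: Duplicate line for path "{path}", ignoring.')
                continue
            seen.add(path)
            kept.append(line)
        return kept
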
Mar 13 00:39:11.127686 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 13 00:39:11.131398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:39:11.137726 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:39:11.147584 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:39:11.154631 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:39:11.160281 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:39:11.163778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:39:11.165283 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 13 00:39:11.169228 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:39:11.171770 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 13 00:39:11.186259 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 00:39:11.193871 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 00:39:11.200652 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 13 00:39:11.206538 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:39:11.209317 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:39:11.210699 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:39:11.211917 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 13 00:39:11.212339 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 00:39:11.213098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:39:11.213347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:39:11.216051 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:39:11.216530 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:39:11.228623 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 13 00:39:11.233956 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 13 00:39:11.244853 augenrules[1485]: No rules Mar 13 00:39:11.250972 systemd[1]: Finished ensure-sysext.service. Mar 13 00:39:11.256095 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:39:11.257813 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:39:11.283899 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 13 00:39:11.295082 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 13 00:39:11.316530 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Mar 13 00:39:11.316741 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 00:39:11.319919 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 13 00:39:11.325739 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 13 00:39:11.341246 kernel: kvm_amd: TSC scaling supported Mar 13 00:39:11.341284 kernel: kvm_amd: Nested Virtualization enabled Mar 13 00:39:11.341310 kernel: kvm_amd: Nested Paging enabled Mar 13 00:39:11.345804 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 13 00:39:11.345848 kernel: kvm_amd: PMU virtualization is disabled Mar 13 00:39:11.352082 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 13 00:39:11.359725 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 13 00:39:11.367068 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 13 00:39:11.382760 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 13 00:39:11.441544 kernel: EDAC MC: Ver: 3.0.0 Mar 13 00:39:11.493212 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 13 00:39:11.595805 systemd-resolved[1469]: Positive Trust Anchors: Mar 13 00:39:11.596278 systemd-resolved[1469]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 13 00:39:11.596308 systemd-resolved[1469]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 13 00:39:11.601115 systemd-resolved[1469]: Defaulting to hostname 'linux'. Mar 13 00:39:11.601792 systemd-networkd[1467]: lo: Link UP Mar 13 00:39:11.602136 systemd-networkd[1467]: lo: Gained carrier Mar 13 00:39:11.604700 systemd-networkd[1467]: Enumeration completed Mar 13 00:39:11.605689 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:39:11.605715 systemd-networkd[1467]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 13 00:39:11.606582 systemd-networkd[1467]: eth0: Link UP Mar 13 00:39:11.607012 systemd-networkd[1467]: eth0: Gained carrier Mar 13 00:39:11.607084 systemd-networkd[1467]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 13 00:39:11.639473 systemd-networkd[1467]: eth0: DHCPv4 address 10.0.0.55/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 13 00:39:11.641637 systemd-timesyncd[1496]: Network configuration changed, trying to establish connection. Mar 13 00:39:11.642729 systemd-timesyncd[1496]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 13 00:39:11.642850 systemd-timesyncd[1496]: Initial clock synchronization to Fri 2026-03-13 00:39:12.030618 UTC. Mar 13 00:39:11.687162 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
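
The systemd-timesyncd entries above record a single exchange with 10.0.0.1:123 followed by the initial clock step. A minimal SNTP query in the same spirit, reading only the server transmit timestamp (client mode, no error handling or delay compensation):

    import socket, struct

    NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server="10.0.0.1"):
        packet = bytearray(48)
        packet[0] = (4 << 3) | 3  # NTPv4, client mode
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2.0)
            s.sendto(packet, (server, 123))
            reply, _ = s.recvfrom(48)
        secs, frac = struct.unpack("!II", reply[40:48])  # transmit timestamp
        return secs - NTP_EPOCH_DELTA + frac / 2**32
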
Mar 13 00:39:11.692013 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 13 00:39:11.695755 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 13 00:39:11.699989 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:39:11.704509 systemd[1]: Reached target network.target - Network. Mar 13 00:39:11.707605 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 13 00:39:11.711645 systemd[1]: Reached target sysinit.target - System Initialization. Mar 13 00:39:11.715169 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 13 00:39:11.718853 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 13 00:39:11.723923 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 13 00:39:11.738579 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 13 00:39:11.744042 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 13 00:39:11.744127 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:39:11.746824 systemd[1]: Reached target time-set.target - System Time Set. Mar 13 00:39:11.750140 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 13 00:39:11.754669 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 13 00:39:11.758901 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:39:11.763756 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 13 00:39:11.771453 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 13 00:39:11.776787 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 13 00:39:11.781580 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 13 00:39:11.785593 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 13 00:39:11.793764 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 13 00:39:11.797467 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 13 00:39:11.893740 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 13 00:39:11.910104 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 13 00:39:11.915297 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 13 00:39:11.921205 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:39:11.925121 systemd[1]: Reached target basic.target - Basic System. Mar 13 00:39:11.928788 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:39:11.928871 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 13 00:39:11.934849 systemd[1]: Starting containerd.service - containerd container runtime... Mar 13 00:39:11.941076 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 13 00:39:11.949573 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 13 00:39:11.955612 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Mar 13 00:39:11.961035 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 13 00:39:11.964586 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 13 00:39:11.965973 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 13 00:39:11.970665 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 13 00:39:11.979381 jq[1523]: false Mar 13 00:39:11.978546 oslogin_cache_refresh[1525]: Refreshing passwd entry cache Mar 13 00:39:11.980067 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing passwd entry cache Mar 13 00:39:11.977832 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 13 00:39:11.983195 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 13 00:39:11.992919 extend-filesystems[1524]: Found /dev/vda6 Mar 13 00:39:11.996224 oslogin_cache_refresh[1525]: Failure getting users, quitting Mar 13 00:39:11.996681 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting users, quitting Mar 13 00:39:11.996681 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:39:11.996681 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing group entry cache Mar 13 00:39:11.996247 oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 13 00:39:11.996340 oslogin_cache_refresh[1525]: Refreshing group entry cache Mar 13 00:39:11.997501 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 13 00:39:12.001129 extend-filesystems[1524]: Found /dev/vda9 Mar 13 00:39:12.001129 extend-filesystems[1524]: Checking size of /dev/vda9 Mar 13 00:39:12.021144 extend-filesystems[1524]: Resized partition /dev/vda9 Mar 13 00:39:12.013628 oslogin_cache_refresh[1525]: Failure getting groups, quitting Mar 13 00:39:12.025825 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting groups, quitting Mar 13 00:39:12.025825 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:39:12.005279 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 13 00:39:12.026261 extend-filesystems[1543]: resize2fs 1.47.3 (8-Jul-2025) Mar 13 00:39:12.033399 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 13 00:39:12.013651 oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 13 00:39:12.011943 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 13 00:39:12.012633 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 13 00:39:12.013272 systemd[1]: Starting update-engine.service - Update Engine... Mar 13 00:39:12.029143 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 13 00:39:12.064333 jq[1547]: true Mar 13 00:39:12.042021 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Mar 13 00:39:12.053182 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 13 00:39:12.061399 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 13 00:39:12.061749 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 13 00:39:12.062153 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 13 00:39:12.062430 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 13 00:39:12.071648 systemd[1]: motdgen.service: Deactivated successfully. Mar 13 00:39:12.072790 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 13 00:39:12.083037 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 13 00:39:12.083509 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 13 00:39:12.107567 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 13 00:39:12.117716 update_engine[1540]: I20260313 00:39:12.117619 1540 main.cc:92] Flatcar Update Engine starting Mar 13 00:39:12.360583 extend-filesystems[1543]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 13 00:39:12.360583 extend-filesystems[1543]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 13 00:39:12.360583 extend-filesystems[1543]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 13 00:39:12.387778 extend-filesystems[1524]: Resized filesystem in /dev/vda9 Mar 13 00:39:12.396696 jq[1553]: true Mar 13 00:39:12.362629 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 13 00:39:12.396864 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 13 00:39:12.397024 tar[1552]: linux-amd64/LICENSE Mar 13 00:39:12.397024 tar[1552]: linux-amd64/helm Mar 13 00:39:12.363003 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 13 00:39:12.379998 (ntainerd)[1555]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 13 00:39:12.384422 systemd-logind[1537]: Watching system buttons on /dev/input/event2 (Power Button) Mar 13 00:39:12.384506 systemd-logind[1537]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 13 00:39:12.387758 systemd-logind[1537]: New seat seat0. Mar 13 00:39:12.403255 systemd[1]: Started systemd-logind.service - User Login Management. Mar 13 00:39:12.414691 dbus-daemon[1521]: [system] SELinux support is enabled Mar 13 00:39:12.414868 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 13 00:39:12.420687 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 13 00:39:12.428023 update_engine[1540]: I20260313 00:39:12.424053 1540 update_check_scheduler.cc:74] Next update check in 9m48s Mar 13 00:39:12.420714 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 13 00:39:12.424491 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 13 00:39:12.424511 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 13 00:39:12.435604 systemd[1]: Started update-engine.service - Update Engine. 
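
The extend-filesystems/resize2fs exchange above is easier to read in bytes: with 4 KiB ext4 blocks, the root filesystem grew from 553472 to 1864699 blocks once the partition had been extended to fill the disk:

    # Sizes implied by the block counts logged above.
    OLD_BLOCKS, NEW_BLOCKS, BLOCK = 553472, 1864699, 4096
    to_gib = lambda blocks: blocks * BLOCK / 2**30
    print(f"{to_gib(OLD_BLOCKS):.2f} GiB -> {to_gib(NEW_BLOCKS):.2f} GiB")
    # 2.11 GiB -> 7.11 GiB
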
Mar 13 00:39:12.439559 dbus-daemon[1521]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 13 00:39:12.443697 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 13 00:39:12.455754 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 13 00:39:12.461402 bash[1592]: Updated "/home/core/.ssh/authorized_keys" Mar 13 00:39:12.464606 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 13 00:39:12.478839 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 13 00:39:12.483014 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 13 00:39:12.829945 systemd[1]: issuegen.service: Deactivated successfully. Mar 13 00:39:12.830362 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 13 00:39:12.841162 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 13 00:39:12.856366 locksmithd[1593]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 13 00:39:12.931362 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 13 00:39:12.937804 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 13 00:39:12.945798 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 13 00:39:12.949639 systemd[1]: Reached target getty.target - Login Prompts. Mar 13 00:39:13.115099 systemd-networkd[1467]: eth0: Gained IPv6LL Mar 13 00:39:13.277093 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 13 00:39:13.282570 systemd[1]: Reached target network-online.target - Network is Online. Mar 13 00:39:13.289757 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 13 00:39:13.356675 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:39:13.362691 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 13 00:39:13.552190 kernel: hrtimer: interrupt took 8909181 ns Mar 13 00:39:13.628710 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 13 00:39:13.638083 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 13 00:39:13.678871 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 13 00:39:13.712326 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
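update_engine schedules its next update check (9m48s out, above), and locksmithd decides when a downloaded update may reboot the machine; the strategy="reboot" it logs is normally configured through a key=value update.conf file. A hedged sketch of reading that setting; the path /etc/flatcar/update.conf and the REBOOT_STRATEGY key are assumptions based on Flatcar's documented conventions, and the fallback simply mirrors the value this log shows:

    from pathlib import Path

    def reboot_strategy(path: str = "/etc/flatcar/update.conf") -> str:
        # Assumed location/key; falls back to the strategy seen in the log.
        try:
            lines = Path(path).read_text().splitlines()
        except FileNotFoundError:
            return "reboot"
        for line in lines:
            if line.startswith("REBOOT_STRATEGY="):
                return line.split("=", 1)[1].strip()
        return "reboot"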
Mar 13 00:39:13.732194 containerd[1555]: time="2026-03-13T00:39:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 13 00:39:13.734510 containerd[1555]: time="2026-03-13T00:39:13.734050175Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 13 00:39:13.769021 containerd[1555]: time="2026-03-13T00:39:13.768942024Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="93.316µs" Mar 13 00:39:13.769155 containerd[1555]: time="2026-03-13T00:39:13.769138181Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 13 00:39:13.769243 containerd[1555]: time="2026-03-13T00:39:13.769225889Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 13 00:39:13.769934 containerd[1555]: time="2026-03-13T00:39:13.769911842Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 13 00:39:13.770027 containerd[1555]: time="2026-03-13T00:39:13.770010298Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 13 00:39:13.770221 containerd[1555]: time="2026-03-13T00:39:13.770199760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:39:13.770409 containerd[1555]: time="2026-03-13T00:39:13.770385316Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 13 00:39:13.770605 containerd[1555]: time="2026-03-13T00:39:13.770584377Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:39:13.771284 containerd[1555]: time="2026-03-13T00:39:13.771256596Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 13 00:39:13.771371 containerd[1555]: time="2026-03-13T00:39:13.771350110Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:39:13.771426 containerd[1555]: time="2026-03-13T00:39:13.771412555Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 13 00:39:13.771559 containerd[1555]: time="2026-03-13T00:39:13.771537620Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 13 00:39:13.771903 containerd[1555]: time="2026-03-13T00:39:13.771880388Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 00:39:13.772679 containerd[1555]: time="2026-03-13T00:39:13.772607198Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:39:13.772833 containerd[1555]: time="2026-03-13T00:39:13.772811231Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:39:13.772889 containerd[1555]: time="2026-03-13T00:39:13.772875784Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 00:39:13.773272 containerd[1555]: time="2026-03-13T00:39:13.773251304Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 00:39:13.774642 containerd[1555]: time="2026-03-13T00:39:13.774613311Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 00:39:13.774891 containerd[1555]: time="2026-03-13T00:39:13.774871191Z" level=info msg="metadata content store policy set" policy=shared Mar 13 00:39:13.785520 containerd[1555]: time="2026-03-13T00:39:13.785489546Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 00:39:13.786021 containerd[1555]: time="2026-03-13T00:39:13.785919186Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 00:39:13.786305 containerd[1555]: time="2026-03-13T00:39:13.786285484Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 00:39:13.786511 containerd[1555]: time="2026-03-13T00:39:13.786489349Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 00:39:13.786624 containerd[1555]: time="2026-03-13T00:39:13.786606811Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 00:39:13.787016 containerd[1555]: time="2026-03-13T00:39:13.786897621Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 00:39:13.787016 containerd[1555]: time="2026-03-13T00:39:13.786942634Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 00:39:13.787265 containerd[1555]: time="2026-03-13T00:39:13.787170154Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 00:39:13.787476 containerd[1555]: time="2026-03-13T00:39:13.787370728Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 00:39:13.787544 containerd[1555]: time="2026-03-13T00:39:13.787397904Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 00:39:13.787687 containerd[1555]: time="2026-03-13T00:39:13.787580411Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 00:39:13.787890 containerd[1555]: time="2026-03-13T00:39:13.787808672Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 00:39:13.788519 containerd[1555]: time="2026-03-13T00:39:13.788392412Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 00:39:13.788755 containerd[1555]: time="2026-03-13T00:39:13.788735712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 00:39:13.788912 containerd[1555]: time="2026-03-13T00:39:13.788887179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 00:39:13.789487 containerd[1555]: time="2026-03-13T00:39:13.789012505Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 00:39:13.789487 containerd[1555]: time="2026-03-13T00:39:13.789029979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 00:39:13.789487 containerd[1555]: time="2026-03-13T00:39:13.789293091Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 00:39:13.789487 containerd[1555]: time="2026-03-13T00:39:13.789352110Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 00:39:13.789487 containerd[1555]: time="2026-03-13T00:39:13.789365582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 00:39:13.789487 containerd[1555]: time="2026-03-13T00:39:13.789377415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 00:39:13.789487 containerd[1555]: time="2026-03-13T00:39:13.789388800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 00:39:13.789487 containerd[1555]: time="2026-03-13T00:39:13.789398700Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 00:39:13.790095 containerd[1555]: time="2026-03-13T00:39:13.790072288Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 00:39:13.790272 containerd[1555]: time="2026-03-13T00:39:13.790254993Z" level=info msg="Start snapshots syncer" Mar 13 00:39:13.790511 containerd[1555]: time="2026-03-13T00:39:13.790489083Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 00:39:13.791645 containerd[1555]: time="2026-03-13T00:39:13.791608217Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 00:39:13.792154 containerd[1555]: time="2026-03-13T00:39:13.792132866Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 00:39:13.792540 containerd[1555]: time="2026-03-13T00:39:13.792516166Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:39:13.792756 containerd[1555]: time="2026-03-13T00:39:13.792735457Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:39:13.792923 containerd[1555]: time="2026-03-13T00:39:13.792814945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:39:13.793000 containerd[1555]: time="2026-03-13T00:39:13.792984543Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:39:13.793102 containerd[1555]: time="2026-03-13T00:39:13.793084888Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:39:13.793184 containerd[1555]: time="2026-03-13T00:39:13.793169671Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:39:13.793246 containerd[1555]: time="2026-03-13T00:39:13.793232847Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:39:13.793322 containerd[1555]: time="2026-03-13T00:39:13.793307395Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:39:13.793530 containerd[1555]: time="2026-03-13T00:39:13.793420241Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:39:13.793594 containerd[1555]: 
time="2026-03-13T00:39:13.793580899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:39:13.793640 containerd[1555]: time="2026-03-13T00:39:13.793628440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:39:13.793737 containerd[1555]: time="2026-03-13T00:39:13.793720878Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:39:13.793795 containerd[1555]: time="2026-03-13T00:39:13.793780900Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:39:13.793877 containerd[1555]: time="2026-03-13T00:39:13.793854121Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:39:13.793963 containerd[1555]: time="2026-03-13T00:39:13.793937965Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:39:13.794056 containerd[1555]: time="2026-03-13T00:39:13.794028848Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:39:13.794187 containerd[1555]: time="2026-03-13T00:39:13.794167866Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:39:13.794309 containerd[1555]: time="2026-03-13T00:39:13.794280587Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 00:39:13.794520 containerd[1555]: time="2026-03-13T00:39:13.794439376Z" level=info msg="runtime interface created" Mar 13 00:39:13.794621 containerd[1555]: time="2026-03-13T00:39:13.794606227Z" level=info msg="created NRI interface" Mar 13 00:39:13.794669 containerd[1555]: time="2026-03-13T00:39:13.794656096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:39:13.794712 containerd[1555]: time="2026-03-13T00:39:13.794702311Z" level=info msg="Connect containerd service" Mar 13 00:39:13.794765 containerd[1555]: time="2026-03-13T00:39:13.794754039Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:39:13.797305 containerd[1555]: time="2026-03-13T00:39:13.797137843Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:39:13.869137 tar[1552]: linux-amd64/README.md Mar 13 00:39:13.911130 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:39:14.398635 containerd[1555]: time="2026-03-13T00:39:14.397006793Z" level=info msg="Start subscribing containerd event" Mar 13 00:39:14.398635 containerd[1555]: time="2026-03-13T00:39:14.397353751Z" level=info msg="Start recovering state" Mar 13 00:39:14.398635 containerd[1555]: time="2026-03-13T00:39:14.398141549Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:39:14.398635 containerd[1555]: time="2026-03-13T00:39:14.398313537Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 13 00:39:14.398635 containerd[1555]: time="2026-03-13T00:39:14.398605096Z" level=info msg="Start event monitor" Mar 13 00:39:14.398952 containerd[1555]: time="2026-03-13T00:39:14.398685863Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:39:14.398952 containerd[1555]: time="2026-03-13T00:39:14.398732070Z" level=info msg="Start streaming server" Mar 13 00:39:14.398952 containerd[1555]: time="2026-03-13T00:39:14.398796545Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:39:14.398952 containerd[1555]: time="2026-03-13T00:39:14.398816495Z" level=info msg="runtime interface starting up..." Mar 13 00:39:14.398952 containerd[1555]: time="2026-03-13T00:39:14.398827998Z" level=info msg="starting plugins..." Mar 13 00:39:14.398952 containerd[1555]: time="2026-03-13T00:39:14.398884939Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:39:14.400277 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:39:14.400721 containerd[1555]: time="2026-03-13T00:39:14.400560065Z" level=info msg="containerd successfully booted in 0.669488s" Mar 13 00:39:15.732981 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 13 00:39:15.738520 systemd[1]: Started sshd@0-10.0.0.55:22-10.0.0.1:58512.service - OpenSSH per-connection server daemon (10.0.0.1:58512). Mar 13 00:39:15.905538 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 58512 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:39:15.907762 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:15.917822 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:39:15.922026 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:39:15.933926 systemd-logind[1537]: New session 1 of user core. Mar 13 00:39:15.957161 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:39:15.963949 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:39:15.981334 (systemd)[1656]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:39:15.985027 systemd-logind[1537]: New session c1 of user core. Mar 13 00:39:16.370276 systemd[1656]: Queued start job for default target default.target. Mar 13 00:39:16.381970 systemd[1656]: Created slice app.slice - User Application Slice. Mar 13 00:39:16.382015 systemd[1656]: Reached target paths.target - Paths. Mar 13 00:39:16.382078 systemd[1656]: Reached target timers.target - Timers. Mar 13 00:39:16.383924 systemd[1656]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:39:16.397573 systemd[1656]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:39:16.397708 systemd[1656]: Reached target sockets.target - Sockets. Mar 13 00:39:16.397798 systemd[1656]: Reached target basic.target - Basic System. Mar 13 00:39:16.397870 systemd[1656]: Reached target default.target - Main User Target. Mar 13 00:39:16.397921 systemd[1656]: Startup finished in 405ms. Mar 13 00:39:16.398391 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:39:16.403201 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:39:16.431578 systemd[1]: Started sshd@1-10.0.0.55:22-10.0.0.1:58518.service - OpenSSH per-connection server daemon (10.0.0.1:58518). 
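The sshd lines in this stretch identify keys only by fingerprint (e.g. "SHA256:eWtiGYQ..."): OpenSSH hashes the raw public-key blob with SHA-256 and prints it as unpadded base64. A small sketch that reproduces the format from an authorized_keys entry (the key material itself never appears in the log):

    import base64
    import hashlib

    def fingerprint(authorized_keys_line: str) -> str:
        # Field 2 of an authorized_keys line is the base64-encoded key blob.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # fingerprint("ssh-ed25519 AAAAC3... core") -> "SHA256:..." as logged by sshd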
Mar 13 00:39:16.494824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:16.498373 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:39:16.498703 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 58518 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:39:16.500332 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:16.501634 systemd[1]: Startup finished in 3.702s (kernel) + 6.793s (initrd) + 8.082s (userspace) = 18.578s. Mar 13 00:39:16.507968 systemd-logind[1537]: New session 2 of user core. Mar 13 00:39:16.509606 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:39:16.527641 sshd[1676]: Connection closed by 10.0.0.1 port 58518 Mar 13 00:39:16.528645 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:16.544869 systemd[1]: sshd@1-10.0.0.55:22-10.0.0.1:58518.service: Deactivated successfully. Mar 13 00:39:16.546894 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:39:16.547869 systemd-logind[1537]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:39:16.551552 systemd[1]: Started sshd@2-10.0.0.55:22-10.0.0.1:58532.service - OpenSSH per-connection server daemon (10.0.0.1:58532). Mar 13 00:39:16.553594 systemd-logind[1537]: Removed session 2. Mar 13 00:39:16.584906 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:39:16.623088 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 58532 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:39:16.624759 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:16.630218 systemd-logind[1537]: New session 3 of user core. Mar 13 00:39:16.639622 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:39:16.648804 sshd[1689]: Connection closed by 10.0.0.1 port 58532 Mar 13 00:39:16.649158 sshd-session[1683]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:16.661168 systemd[1]: sshd@2-10.0.0.55:22-10.0.0.1:58532.service: Deactivated successfully. Mar 13 00:39:16.663131 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:39:16.664110 systemd-logind[1537]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:39:16.666697 systemd[1]: Started sshd@3-10.0.0.55:22-10.0.0.1:58542.service - OpenSSH per-connection server daemon (10.0.0.1:58542). Mar 13 00:39:16.667908 systemd-logind[1537]: Removed session 3. Mar 13 00:39:16.721396 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 58542 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:39:16.723695 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:16.729176 systemd-logind[1537]: New session 4 of user core. Mar 13 00:39:16.734610 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:39:16.750612 sshd[1703]: Connection closed by 10.0.0.1 port 58542 Mar 13 00:39:16.751564 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:16.761088 systemd[1]: sshd@3-10.0.0.55:22-10.0.0.1:58542.service: Deactivated successfully. Mar 13 00:39:16.763062 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:39:16.764064 systemd-logind[1537]: Session 4 logged out. Waiting for processes to exit. 
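One detail in the "Startup finished" line above: the three rounded components sum to 18.577s while the printed total is 18.578s. The likely explanation (an assumption, not something the log states) is that systemd sums the unrounded microsecond values before rounding for display. Parsing the line makes the 1 ms gap visible:

    import re

    line = ("Startup finished in 3.702s (kernel) + 6.793s (initrd) "
            "+ 8.082s (userspace) = 18.578s")
    parts = {name: float(sec) for sec, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
    total = float(re.search(r"= ([\d.]+)s", line).group(1))
    print(parts)                               # {'kernel': 3.702, 'initrd': 6.793, 'userspace': 8.082}
    print(round(sum(parts.values()), 3), total)  # 18.577 18.578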
Mar 13 00:39:16.766796 systemd[1]: Started sshd@4-10.0.0.55:22-10.0.0.1:58554.service - OpenSSH per-connection server daemon (10.0.0.1:58554). Mar 13 00:39:16.767996 systemd-logind[1537]: Removed session 4. Mar 13 00:39:16.824843 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 58554 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:39:16.826827 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:16.832982 systemd-logind[1537]: New session 5 of user core. Mar 13 00:39:16.841653 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:39:16.867618 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:39:16.868058 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:16.891305 sudo[1713]: pam_unix(sudo:session): session closed for user root Mar 13 00:39:16.893156 sshd[1712]: Connection closed by 10.0.0.1 port 58554 Mar 13 00:39:16.893747 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:16.907276 systemd[1]: sshd@4-10.0.0.55:22-10.0.0.1:58554.service: Deactivated successfully. Mar 13 00:39:16.909322 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:39:16.910350 systemd-logind[1537]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:39:16.912934 systemd[1]: Started sshd@5-10.0.0.55:22-10.0.0.1:58558.service - OpenSSH per-connection server daemon (10.0.0.1:58558). Mar 13 00:39:16.914855 systemd-logind[1537]: Removed session 5. Mar 13 00:39:16.993906 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 58558 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:39:16.995488 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:17.000713 systemd-logind[1537]: New session 6 of user core. Mar 13 00:39:17.011606 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 00:39:17.030225 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 13 00:39:17.030670 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:17.038505 sudo[1725]: pam_unix(sudo:session): session closed for user root Mar 13 00:39:17.216718 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 13 00:39:17.217124 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:17.239601 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:39:17.317575 augenrules[1747]: No rules Mar 13 00:39:17.319580 systemd[1]: audit-rules.service: Deactivated successfully. Mar 13 00:39:17.320045 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 13 00:39:17.321723 sudo[1724]: pam_unix(sudo:session): session closed for user root Mar 13 00:39:17.324156 sshd[1723]: Connection closed by 10.0.0.1 port 58558 Mar 13 00:39:17.325886 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Mar 13 00:39:17.336962 systemd[1]: sshd@5-10.0.0.55:22-10.0.0.1:58558.service: Deactivated successfully. Mar 13 00:39:17.338845 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:39:17.339961 systemd-logind[1537]: Session 6 logged out. Waiting for processes to exit. 
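The sudo entries above follow a fixed layout: the invoking user, then " ; "-separated KEY=VALUE fields (PWD, USER, COMMAND). A minimal parser for exactly that shape (illustrative; not a full sudo log grammar):

    def parse_sudo(entry: str) -> dict:
        user, _, rest = entry.partition(" : ")
        fields = {"user": user.strip()}
        for part in rest.split(" ; "):
            key, _, value = part.partition("=")
            fields[key] = value
        return fields

    print(parse_sudo("core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"))
    # {'user': 'core', 'PWD': '/home/core', 'USER': 'root', 'COMMAND': '/usr/sbin/setenforce 1'}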
Mar 13 00:39:17.342844 systemd[1]: Started sshd@6-10.0.0.55:22-10.0.0.1:58572.service - OpenSSH per-connection server daemon (10.0.0.1:58572). Mar 13 00:39:17.345022 systemd-logind[1537]: Removed session 6. Mar 13 00:39:17.419275 kubelet[1675]: E0313 00:39:17.419193 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:39:17.421605 sshd[1756]: Accepted publickey for core from 10.0.0.1 port 58572 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:39:17.423519 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:39:17.423646 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:39:17.423817 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:39:17.424361 systemd[1]: kubelet.service: Consumed 3.255s CPU time, 272.1M memory peak. Mar 13 00:39:17.430966 systemd-logind[1537]: New session 7 of user core. Mar 13 00:39:17.445763 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:39:17.462692 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 13 00:39:17.463087 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:39:18.857909 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 13 00:39:18.888944 (dockerd)[1782]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 13 00:39:19.963921 dockerd[1782]: time="2026-03-13T00:39:19.963674240Z" level=info msg="Starting up" Mar 13 00:39:19.966336 dockerd[1782]: time="2026-03-13T00:39:19.966295327Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 13 00:39:20.012909 dockerd[1782]: time="2026-03-13T00:39:20.012793767Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 13 00:39:20.181932 dockerd[1782]: time="2026-03-13T00:39:20.181788902Z" level=info msg="Loading containers: start." Mar 13 00:39:20.197503 kernel: Initializing XFRM netlink socket Mar 13 00:39:20.824069 systemd-networkd[1467]: docker0: Link UP Mar 13 00:39:20.831048 dockerd[1782]: time="2026-03-13T00:39:20.830926805Z" level=info msg="Loading containers: done." Mar 13 00:39:20.858198 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck159309524-merged.mount: Deactivated successfully. 
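As dockerd finishes starting up, its API becomes reachable over a Unix socket. A stdlib-only liveness probe against the conventional socket path (requires permission to open the socket; /_ping is the daemon's standard health endpoint):

    import socket

    def docker_ping(path: str = "/var/run/docker.sock") -> str:
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(path)
            s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
            data = b""
            while chunk := s.recv(4096):
                data += chunk
        return data.decode(errors="replace").splitlines()[0]

    # docker_ping() -> "HTTP/1.0 200 OK" once the daemon has completed initialization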
Mar 13 00:39:20.860660 dockerd[1782]: time="2026-03-13T00:39:20.860572950Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 13 00:39:20.860829 dockerd[1782]: time="2026-03-13T00:39:20.860787327Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 13 00:39:20.861125 dockerd[1782]: time="2026-03-13T00:39:20.861062860Z" level=info msg="Initializing buildkit" Mar 13 00:39:20.922324 dockerd[1782]: time="2026-03-13T00:39:20.922195998Z" level=info msg="Completed buildkit initialization" Mar 13 00:39:20.928139 dockerd[1782]: time="2026-03-13T00:39:20.928055496Z" level=info msg="Daemon has completed initialization" Mar 13 00:39:20.928970 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 13 00:39:20.937955 dockerd[1782]: time="2026-03-13T00:39:20.937692094Z" level=info msg="API listen on /run/docker.sock" Mar 13 00:39:22.352657 containerd[1555]: time="2026-03-13T00:39:22.352401541Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 13 00:39:23.166883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4053546597.mount: Deactivated successfully. Mar 13 00:39:25.475601 containerd[1555]: time="2026-03-13T00:39:25.475453794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:25.476513 containerd[1555]: time="2026-03-13T00:39:25.475820738Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 13 00:39:25.477259 containerd[1555]: time="2026-03-13T00:39:25.477191953Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:25.480596 containerd[1555]: time="2026-03-13T00:39:25.480549998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:25.481680 containerd[1555]: time="2026-03-13T00:39:25.481633456Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 3.129059892s" Mar 13 00:39:25.481680 containerd[1555]: time="2026-03-13T00:39:25.481676328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 13 00:39:25.485023 containerd[1555]: time="2026-03-13T00:39:25.484989556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 13 00:39:27.365775 containerd[1555]: time="2026-03-13T00:39:27.365575082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:27.366822 containerd[1555]: time="2026-03-13T00:39:27.366467798Z" level=info msg="stop pulling image 
registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 13 00:39:27.368015 containerd[1555]: time="2026-03-13T00:39:27.367962189Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:27.370982 containerd[1555]: time="2026-03-13T00:39:27.370918725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:27.371992 containerd[1555]: time="2026-03-13T00:39:27.371929567Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.886901933s" Mar 13 00:39:27.371992 containerd[1555]: time="2026-03-13T00:39:27.371986372Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 13 00:39:27.374017 containerd[1555]: time="2026-03-13T00:39:27.373971047Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 13 00:39:27.674837 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:39:27.677144 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:39:28.927245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:28.950062 (kubelet)[2076]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:39:29.301508 kubelet[2076]: E0313 00:39:29.301225 2076 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:39:29.310001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:39:29.310493 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:39:29.311342 systemd[1]: kubelet.service: Consumed 1.294s CPU time, 111.3M memory peak. 
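The kubelet failure above is the normal first-boot state on a node that has not yet joined a cluster: the unit keeps restarting until something (typically kubeadm) writes /var/lib/kubelet/config.yaml. A deliberately minimal, illustrative KubeletConfiguration; the field values here are assumptions for the sketch, not this node's real settings:

    from pathlib import Path

    # Hypothetical minimal config; kubeadm normally generates the real one.
    MINIMAL_CONFIG = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
        "cgroupDriver: systemd\n"
        "containerRuntimeEndpoint: unix:///run/containerd/containerd.sock\n"
    )

    Path("/var/lib/kubelet/config.yaml").write_text(MINIMAL_CONFIG)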
Mar 13 00:39:29.854582 containerd[1555]: time="2026-03-13T00:39:29.854486667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:29.855790 containerd[1555]: time="2026-03-13T00:39:29.855664127Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 13 00:39:29.857284 containerd[1555]: time="2026-03-13T00:39:29.857159812Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:29.860164 containerd[1555]: time="2026-03-13T00:39:29.860072987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:29.861498 containerd[1555]: time="2026-03-13T00:39:29.861453413Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 2.487432987s" Mar 13 00:39:29.861498 containerd[1555]: time="2026-03-13T00:39:29.861498038Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 13 00:39:29.863277 containerd[1555]: time="2026-03-13T00:39:29.863185676Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 13 00:39:31.108032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331716011.mount: Deactivated successfully. 
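The byte counts and durations containerd reports make it easy to estimate effective registry throughput; for the kube-scheduler pull above:

    # Numbers taken from the two containerd messages above.
    bytes_read = 20_162_746      # "active requests=0, bytes read=..."
    seconds = 2.487432987        # "... in 2.487432987s"
    print(f"{bytes_read / seconds / 2**20:.2f} MiB/s")  # 7.73 MiB/s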
Mar 13 00:39:32.479689 containerd[1555]: time="2026-03-13T00:39:32.479455094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:32.480663 containerd[1555]: time="2026-03-13T00:39:32.480106065Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 13 00:39:32.481249 containerd[1555]: time="2026-03-13T00:39:32.481180927Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:32.483577 containerd[1555]: time="2026-03-13T00:39:32.483539294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:32.484497 containerd[1555]: time="2026-03-13T00:39:32.484390449Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.621003275s" Mar 13 00:39:32.484497 containerd[1555]: time="2026-03-13T00:39:32.484467740Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 13 00:39:32.485855 containerd[1555]: time="2026-03-13T00:39:32.485805954Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 13 00:39:32.972636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3143713039.mount: Deactivated successfully. 
Mar 13 00:39:34.472265 containerd[1555]: time="2026-03-13T00:39:34.472066710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:34.473317 containerd[1555]: time="2026-03-13T00:39:34.473005826Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 13 00:39:34.474273 containerd[1555]: time="2026-03-13T00:39:34.474216684Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:34.477281 containerd[1555]: time="2026-03-13T00:39:34.477217453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:34.478329 containerd[1555]: time="2026-03-13T00:39:34.478239707Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.992391228s" Mar 13 00:39:34.478329 containerd[1555]: time="2026-03-13T00:39:34.478296317Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 13 00:39:34.479886 containerd[1555]: time="2026-03-13T00:39:34.479721563Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 13 00:39:34.861472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985526634.mount: Deactivated successfully. 
Mar 13 00:39:34.867639 containerd[1555]: time="2026-03-13T00:39:34.867549946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:34.868809 containerd[1555]: time="2026-03-13T00:39:34.868757921Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 13 00:39:34.870012 containerd[1555]: time="2026-03-13T00:39:34.869940043Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:34.871956 containerd[1555]: time="2026-03-13T00:39:34.871904153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 13 00:39:34.872606 containerd[1555]: time="2026-03-13T00:39:34.872523350Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 392.758184ms" Mar 13 00:39:34.872606 containerd[1555]: time="2026-03-13T00:39:34.872564250Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 13 00:39:34.873239 containerd[1555]: time="2026-03-13T00:39:34.873149967Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 13 00:39:35.315823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount20355962.mount: Deactivated successfully. 
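The odd-looking "\x2d" in the mount unit names above is systemd's unit-name escaping: "/" in a path becomes "-", so a literal dash has to be hex-escaped to keep the mapping reversible. A simplified encoder (a subset of what systemd-escape --path does; edge cases such as a leading "." are ignored here):

    def systemd_path_escape(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_.:":
                out.append(ch)
            else:
                out.append(f"\\x{ord(ch):02x}")
        return "".join(out)

    print(systemd_path_escape("/var/lib/containerd/tmpmounts/containerd-mount20355962"))
    # var-lib-containerd-tmpmounts-containerd\x2dmount20355962 -- matching the log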
Mar 13 00:39:36.131881 containerd[1555]: time="2026-03-13T00:39:36.131800149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:36.132905 containerd[1555]: time="2026-03-13T00:39:36.132873008Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 13 00:39:36.134326 containerd[1555]: time="2026-03-13T00:39:36.134278125Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:36.137171 containerd[1555]: time="2026-03-13T00:39:36.137116457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:36.138042 containerd[1555]: time="2026-03-13T00:39:36.137976731Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.264783024s" Mar 13 00:39:36.138042 containerd[1555]: time="2026-03-13T00:39:36.138034462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 13 00:39:39.425161 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 13 00:39:39.426934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:39:39.460864 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:39:39.461002 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:39:39.461375 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:39.465343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:39:39.489201 systemd[1]: Reload requested from client PID 2248 ('systemctl') (unit session-7.scope)... Mar 13 00:39:39.489343 systemd[1]: Reloading... Mar 13 00:39:39.573558 zram_generator::config[2296]: No configuration found. Mar 13 00:39:39.787227 systemd[1]: Reloading finished in 297 ms. Mar 13 00:39:39.863046 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 13 00:39:39.863204 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 13 00:39:39.863611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:39.863667 systemd[1]: kubelet.service: Consumed 159ms CPU time, 98.1M memory peak. Mar 13 00:39:39.865294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:39:40.035196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:40.048774 (kubelet)[2339]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:39:40.093805 kubelet[2339]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
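The kubelet deprecation warnings around this point all ask for the same migration: move flags into the KubeletConfiguration file. The mapping for the flags logged here, with field names as given by the v1beta1 config schema (to the best of my knowledge):

    # Deprecated CLI flag -> KubeletConfiguration (v1beta1) field.
    FLAG_TO_CONFIG_FIELD = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",
        "--volume-plugin-dir": "volumePluginDir",
        # --pod-infra-container-image is slated for removal in 1.35; the
        # sandbox image is then taken from the CRI runtime instead.
    }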
Mar 13 00:39:40.093805 kubelet[2339]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:39:40.093805 kubelet[2339]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:39:40.094157 kubelet[2339]: I0313 00:39:40.093863 2339 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:39:40.698468 kubelet[2339]: I0313 00:39:40.698371 2339 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 00:39:40.698468 kubelet[2339]: I0313 00:39:40.698454 2339 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:39:40.698734 kubelet[2339]: I0313 00:39:40.698668 2339 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:39:40.727653 kubelet[2339]: E0313 00:39:40.727591 2339 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.55:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 13 00:39:40.729327 kubelet[2339]: I0313 00:39:40.729256 2339 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:39:40.735676 kubelet[2339]: I0313 00:39:40.735623 2339 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:39:40.742874 kubelet[2339]: I0313 00:39:40.742807 2339 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 13 00:39:40.743230 kubelet[2339]: I0313 00:39:40.743142 2339 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:39:40.743512 kubelet[2339]: I0313 00:39:40.743209 2339 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:39:40.743637 kubelet[2339]: I0313 00:39:40.743538 2339 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:39:40.743637 kubelet[2339]: I0313 00:39:40.743550 2339 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 00:39:40.743790 kubelet[2339]: I0313 00:39:40.743743 2339 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:39:40.747321 kubelet[2339]: I0313 00:39:40.747263 2339 kubelet.go:480] "Attempting to sync node with API server" Mar 13 00:39:40.747321 kubelet[2339]: I0313 00:39:40.747305 2339 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:39:40.747389 kubelet[2339]: I0313 00:39:40.747367 2339 kubelet.go:386] "Adding apiserver pod source" Mar 13 00:39:40.747499 kubelet[2339]: I0313 00:39:40.747461 2339 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:39:40.750986 kubelet[2339]: E0313 00:39:40.750929 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:39:40.751365 kubelet[2339]: E0313 00:39:40.751282 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:39:40.753505 
kubelet[2339]: I0313 00:39:40.753401 2339 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:39:40.754156 kubelet[2339]: I0313 00:39:40.754116 2339 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:39:40.754980 kubelet[2339]: W0313 00:39:40.754915 2339 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 13 00:39:40.760262 kubelet[2339]: I0313 00:39:40.760191 2339 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:39:40.760316 kubelet[2339]: I0313 00:39:40.760292 2339 server.go:1289] "Started kubelet" Mar 13 00:39:40.760621 kubelet[2339]: I0313 00:39:40.760480 2339 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:39:40.761084 kubelet[2339]: I0313 00:39:40.761066 2339 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:39:40.761260 kubelet[2339]: I0313 00:39:40.761138 2339 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:39:40.762270 kubelet[2339]: I0313 00:39:40.762233 2339 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:39:40.762667 kubelet[2339]: I0313 00:39:40.762507 2339 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:39:40.764571 kubelet[2339]: I0313 00:39:40.764520 2339 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:39:40.764845 kubelet[2339]: I0313 00:39:40.764828 2339 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:39:40.765193 kubelet[2339]: E0313 00:39:40.765173 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:40.765575 kubelet[2339]: E0313 00:39:40.764515 2339 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.55:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.55:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c3fb08b41557e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-13 00:39:40.760241534 +0000 UTC m=+0.707064188,LastTimestamp:2026-03-13 00:39:40.760241534 +0000 UTC m=+0.707064188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 13 00:39:40.767832 kubelet[2339]: E0313 00:39:40.767725 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="200ms" Mar 13 00:39:40.767885 kubelet[2339]: I0313 00:39:40.767831 2339 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:39:40.768090 kubelet[2339]: I0313 00:39:40.767998 2339 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:39:40.768658 kubelet[2339]: E0313 
00:39:40.768594 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.55:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 13 00:39:40.769267 kubelet[2339]: E0313 00:39:40.769182 2339 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:39:40.769478 kubelet[2339]: I0313 00:39:40.769353 2339 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:39:40.771114 kubelet[2339]: I0313 00:39:40.771084 2339 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:39:40.771114 kubelet[2339]: I0313 00:39:40.771110 2339 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:39:40.788775 kubelet[2339]: I0313 00:39:40.788745 2339 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:39:40.788775 kubelet[2339]: I0313 00:39:40.788759 2339 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:39:40.788775 kubelet[2339]: I0313 00:39:40.788775 2339 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:39:40.793876 kubelet[2339]: I0313 00:39:40.793797 2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 13 00:39:40.795706 kubelet[2339]: I0313 00:39:40.795662 2339 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 13 00:39:40.796029 kubelet[2339]: I0313 00:39:40.795975 2339 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:39:40.796104 kubelet[2339]: I0313 00:39:40.796034 2339 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
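The Container Manager nodeConfig dump above carries the kubelet's hard eviction thresholds: memory.available below an absolute 100Mi, and the nodefs/imagefs signals below fractional shares of capacity (0.1, 0.05, 0.15). A minimal Go sketch of how such a threshold comparison works, using hypothetical types rather than kubelet's real ones:

package main

import "fmt"

// Threshold mirrors the shape seen in the nodeConfig dump above:
// either an absolute quantity (here in bytes) or a percentage of
// capacity. In the dump, one of Quantity/Percentage is always null.
type Threshold struct {
	Signal     string
	QuantityB  int64   // absolute bytes; 0 means "use Percentage"
	Percentage float64 // fraction of capacity, e.g. 0.1 for 10%
}

// crossed reports whether the observed available amount has fallen
// below the threshold, given total capacity in bytes.
func crossed(t Threshold, availableB, capacityB int64) bool {
	limit := t.QuantityB
	if limit == 0 {
		limit = int64(t.Percentage * float64(capacityB))
	}
	return availableB < limit
}

func main() {
	// memory.available < 100Mi (absolute threshold from the dump above).
	mem := Threshold{Signal: "memory.available", QuantityB: 100 << 20}
	// nodefs.available < 10% (percentage-style threshold).
	fs := Threshold{Signal: "nodefs.available", Percentage: 0.1}

	fmt.Println(crossed(mem, 80<<20, 4<<30)) // true: 80Mi available < 100Mi
	fmt.Println(crossed(fs, 10<<30, 50<<30)) // false: 20% free is above 10%
}

The sketch treats a zero quantity as "use the percentage" because the two forms are mutually exclusive per threshold in the dump; the real eviction manager additionally tracks grace periods and min-reclaim amounts, both zero/null above.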
Mar 13 00:39:40.796104 kubelet[2339]: I0313 00:39:40.796067 2339 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:39:40.796153 kubelet[2339]: E0313 00:39:40.796112 2339 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:39:40.796915 kubelet[2339]: E0313 00:39:40.796820 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:39:40.866019 kubelet[2339]: E0313 00:39:40.865903 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:40.888161 kubelet[2339]: I0313 00:39:40.888077 2339 policy_none.go:49] "None policy: Start" Mar 13 00:39:40.888161 kubelet[2339]: I0313 00:39:40.888165 2339 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:39:40.888283 kubelet[2339]: I0313 00:39:40.888204 2339 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:39:40.896384 kubelet[2339]: E0313 00:39:40.896320 2339 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 00:39:40.898294 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 13 00:39:40.914053 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:39:40.919207 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 13 00:39:40.934932 kubelet[2339]: E0313 00:39:40.934828 2339 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:39:40.935386 kubelet[2339]: I0313 00:39:40.935306 2339 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:39:40.935573 kubelet[2339]: I0313 00:39:40.935387 2339 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:39:40.935852 kubelet[2339]: I0313 00:39:40.935808 2339 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:39:40.937400 kubelet[2339]: E0313 00:39:40.937351 2339 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 13 00:39:40.937533 kubelet[2339]: E0313 00:39:40.937497 2339 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 13 00:39:40.968609 kubelet[2339]: E0313 00:39:40.968346 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="400ms" Mar 13 00:39:41.037107 kubelet[2339]: I0313 00:39:41.037017 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:39:41.037637 kubelet[2339]: E0313 00:39:41.037576 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Mar 13 00:39:41.113560 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 13 00:39:41.125652 kubelet[2339]: E0313 00:39:41.125562 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:39:41.129590 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. Mar 13 00:39:41.140035 kubelet[2339]: E0313 00:39:41.139965 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:39:41.143030 systemd[1]: Created slice kubepods-burstable-podcda6d84d38ecbadc12dbb6ba9a315f74.slice - libcontainer container kubepods-burstable-podcda6d84d38ecbadc12dbb6ba9a315f74.slice. 
Mar 13 00:39:41.145882 kubelet[2339]: E0313 00:39:41.145825 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:39:41.170258 kubelet[2339]: I0313 00:39:41.170193 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:41.170258 kubelet[2339]: I0313 00:39:41.170251 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:41.170321 kubelet[2339]: I0313 00:39:41.170272 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:41.170321 kubelet[2339]: I0313 00:39:41.170289 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 13 00:39:41.170321 kubelet[2339]: I0313 00:39:41.170303 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda6d84d38ecbadc12dbb6ba9a315f74-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda6d84d38ecbadc12dbb6ba9a315f74\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:41.170321 kubelet[2339]: I0313 00:39:41.170317 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:41.170460 kubelet[2339]: I0313 00:39:41.170331 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:41.170460 kubelet[2339]: I0313 00:39:41.170345 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda6d84d38ecbadc12dbb6ba9a315f74-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda6d84d38ecbadc12dbb6ba9a315f74\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:41.170460 kubelet[2339]: I0313 00:39:41.170359 2339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda6d84d38ecbadc12dbb6ba9a315f74-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cda6d84d38ecbadc12dbb6ba9a315f74\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:41.239298 kubelet[2339]: I0313 00:39:41.239152 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:39:41.239619 kubelet[2339]: E0313 00:39:41.239581 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Mar 13 00:39:41.369088 kubelet[2339]: E0313 00:39:41.368996 2339 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.55:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.55:6443: connect: connection refused" interval="800ms" Mar 13 00:39:41.426906 kubelet[2339]: E0313 00:39:41.426766 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:41.427721 containerd[1555]: time="2026-03-13T00:39:41.427678713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:41.441351 kubelet[2339]: E0313 00:39:41.441149 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:41.441881 containerd[1555]: time="2026-03-13T00:39:41.441799137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:41.447116 kubelet[2339]: E0313 00:39:41.447084 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:41.447839 containerd[1555]: time="2026-03-13T00:39:41.447624307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cda6d84d38ecbadc12dbb6ba9a315f74,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:41.464726 containerd[1555]: time="2026-03-13T00:39:41.464328799Z" level=info msg="connecting to shim 3d1f7b1da2e9c29661a7f827bd8df0e289b2420090754d6db046fe48166fb99b" address="unix:///run/containerd/s/2a4adf934cb2bf88ff6abc5ce798e4eea78decf1497d48adf55cbae95245b4e6" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:41.472292 containerd[1555]: time="2026-03-13T00:39:41.472233014Z" level=info msg="connecting to shim 07b809bdfa869af2d8577e29b530296641bf03d4e308542be4edda13026dd966" address="unix:///run/containerd/s/656c99b1db991db8d0c32c46389962d8544df8505eecc85cfea775b17cbf660c" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:41.481097 containerd[1555]: time="2026-03-13T00:39:41.481066130Z" level=info msg="connecting to shim d1d24e1c1f721e17d4569d79f7eb6067fc92eb453d929c3ceb60afaaf87bf9c8" address="unix:///run/containerd/s/62a7b6469d0018780147c2bb65357e48c7658d8c27005911e7ef609510fa1e34" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:41.503719 systemd[1]: Started cri-containerd-3d1f7b1da2e9c29661a7f827bd8df0e289b2420090754d6db046fe48166fb99b.scope - libcontainer container 
3d1f7b1da2e9c29661a7f827bd8df0e289b2420090754d6db046fe48166fb99b. Mar 13 00:39:41.511201 systemd[1]: Started cri-containerd-07b809bdfa869af2d8577e29b530296641bf03d4e308542be4edda13026dd966.scope - libcontainer container 07b809bdfa869af2d8577e29b530296641bf03d4e308542be4edda13026dd966. Mar 13 00:39:41.513685 systemd[1]: Started cri-containerd-d1d24e1c1f721e17d4569d79f7eb6067fc92eb453d929c3ceb60afaaf87bf9c8.scope - libcontainer container d1d24e1c1f721e17d4569d79f7eb6067fc92eb453d929c3ceb60afaaf87bf9c8. Mar 13 00:39:41.574234 containerd[1555]: time="2026-03-13T00:39:41.574146401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"07b809bdfa869af2d8577e29b530296641bf03d4e308542be4edda13026dd966\"" Mar 13 00:39:41.575744 kubelet[2339]: E0313 00:39:41.575667 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:41.576837 containerd[1555]: time="2026-03-13T00:39:41.576774266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cda6d84d38ecbadc12dbb6ba9a315f74,Namespace:kube-system,Attempt:0,} returns sandbox id \"d1d24e1c1f721e17d4569d79f7eb6067fc92eb453d929c3ceb60afaaf87bf9c8\"" Mar 13 00:39:41.577671 kubelet[2339]: E0313 00:39:41.577643 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:41.581462 containerd[1555]: time="2026-03-13T00:39:41.581347560Z" level=info msg="CreateContainer within sandbox \"07b809bdfa869af2d8577e29b530296641bf03d4e308542be4edda13026dd966\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:39:41.582855 containerd[1555]: time="2026-03-13T00:39:41.582796005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d1f7b1da2e9c29661a7f827bd8df0e289b2420090754d6db046fe48166fb99b\"" Mar 13 00:39:41.583622 containerd[1555]: time="2026-03-13T00:39:41.583570468Z" level=info msg="CreateContainer within sandbox \"d1d24e1c1f721e17d4569d79f7eb6067fc92eb453d929c3ceb60afaaf87bf9c8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:39:41.584830 kubelet[2339]: E0313 00:39:41.584787 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:41.590102 containerd[1555]: time="2026-03-13T00:39:41.590020391Z" level=info msg="CreateContainer within sandbox \"3d1f7b1da2e9c29661a7f827bd8df0e289b2420090754d6db046fe48166fb99b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:39:41.591350 containerd[1555]: time="2026-03-13T00:39:41.591308024Z" level=info msg="Container 83c56e83c4279f3470f590f0c6256f000f3748e66f495dadb9b67842977f654e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:41.598294 containerd[1555]: time="2026-03-13T00:39:41.598231274Z" level=info msg="Container 57f8fb14e1a6076d7c2a59289cccc392940e800d903917cb25f5547f500e7073: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:41.610275 containerd[1555]: time="2026-03-13T00:39:41.610229298Z" level=info msg="CreateContainer within sandbox 
\"d1d24e1c1f721e17d4569d79f7eb6067fc92eb453d929c3ceb60afaaf87bf9c8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57f8fb14e1a6076d7c2a59289cccc392940e800d903917cb25f5547f500e7073\"" Mar 13 00:39:41.610684 containerd[1555]: time="2026-03-13T00:39:41.610574133Z" level=info msg="Container 75779457ca1d42b727f744470e0c54e18877e3578e80081ffaa4bd17c8932aa5: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:41.610935 containerd[1555]: time="2026-03-13T00:39:41.610877650Z" level=info msg="StartContainer for \"57f8fb14e1a6076d7c2a59289cccc392940e800d903917cb25f5547f500e7073\"" Mar 13 00:39:41.612264 containerd[1555]: time="2026-03-13T00:39:41.612228180Z" level=info msg="connecting to shim 57f8fb14e1a6076d7c2a59289cccc392940e800d903917cb25f5547f500e7073" address="unix:///run/containerd/s/62a7b6469d0018780147c2bb65357e48c7658d8c27005911e7ef609510fa1e34" protocol=ttrpc version=3 Mar 13 00:39:41.614399 containerd[1555]: time="2026-03-13T00:39:41.614366929Z" level=info msg="CreateContainer within sandbox \"07b809bdfa869af2d8577e29b530296641bf03d4e308542be4edda13026dd966\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"83c56e83c4279f3470f590f0c6256f000f3748e66f495dadb9b67842977f654e\"" Mar 13 00:39:41.615001 containerd[1555]: time="2026-03-13T00:39:41.614980983Z" level=info msg="StartContainer for \"83c56e83c4279f3470f590f0c6256f000f3748e66f495dadb9b67842977f654e\"" Mar 13 00:39:41.616534 containerd[1555]: time="2026-03-13T00:39:41.616454467Z" level=info msg="connecting to shim 83c56e83c4279f3470f590f0c6256f000f3748e66f495dadb9b67842977f654e" address="unix:///run/containerd/s/656c99b1db991db8d0c32c46389962d8544df8505eecc85cfea775b17cbf660c" protocol=ttrpc version=3 Mar 13 00:39:41.618153 containerd[1555]: time="2026-03-13T00:39:41.618053571Z" level=info msg="CreateContainer within sandbox \"3d1f7b1da2e9c29661a7f827bd8df0e289b2420090754d6db046fe48166fb99b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"75779457ca1d42b727f744470e0c54e18877e3578e80081ffaa4bd17c8932aa5\"" Mar 13 00:39:41.619618 containerd[1555]: time="2026-03-13T00:39:41.619585256Z" level=info msg="StartContainer for \"75779457ca1d42b727f744470e0c54e18877e3578e80081ffaa4bd17c8932aa5\"" Mar 13 00:39:41.621237 containerd[1555]: time="2026-03-13T00:39:41.621095364Z" level=info msg="connecting to shim 75779457ca1d42b727f744470e0c54e18877e3578e80081ffaa4bd17c8932aa5" address="unix:///run/containerd/s/2a4adf934cb2bf88ff6abc5ce798e4eea78decf1497d48adf55cbae95245b4e6" protocol=ttrpc version=3 Mar 13 00:39:41.632562 systemd[1]: Started cri-containerd-57f8fb14e1a6076d7c2a59289cccc392940e800d903917cb25f5547f500e7073.scope - libcontainer container 57f8fb14e1a6076d7c2a59289cccc392940e800d903917cb25f5547f500e7073. Mar 13 00:39:41.642377 kubelet[2339]: I0313 00:39:41.642326 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:39:41.642966 kubelet[2339]: E0313 00:39:41.642931 2339 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.55:6443/api/v1/nodes\": dial tcp 10.0.0.55:6443: connect: connection refused" node="localhost" Mar 13 00:39:41.643594 systemd[1]: Started cri-containerd-83c56e83c4279f3470f590f0c6256f000f3748e66f495dadb9b67842977f654e.scope - libcontainer container 83c56e83c4279f3470f590f0c6256f000f3748e66f495dadb9b67842977f654e. 
Mar 13 00:39:41.647291 systemd[1]: Started cri-containerd-75779457ca1d42b727f744470e0c54e18877e3578e80081ffaa4bd17c8932aa5.scope - libcontainer container 75779457ca1d42b727f744470e0c54e18877e3578e80081ffaa4bd17c8932aa5. Mar 13 00:39:41.660536 kubelet[2339]: E0313 00:39:41.660473 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.55:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:39:41.708015 containerd[1555]: time="2026-03-13T00:39:41.707931083Z" level=info msg="StartContainer for \"57f8fb14e1a6076d7c2a59289cccc392940e800d903917cb25f5547f500e7073\" returns successfully" Mar 13 00:39:41.724445 containerd[1555]: time="2026-03-13T00:39:41.723925063Z" level=info msg="StartContainer for \"75779457ca1d42b727f744470e0c54e18877e3578e80081ffaa4bd17c8932aa5\" returns successfully" Mar 13 00:39:41.730274 containerd[1555]: time="2026-03-13T00:39:41.730196631Z" level=info msg="StartContainer for \"83c56e83c4279f3470f590f0c6256f000f3748e66f495dadb9b67842977f654e\" returns successfully" Mar 13 00:39:41.753347 kubelet[2339]: E0313 00:39:41.753286 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.55:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 13 00:39:41.795521 kubelet[2339]: E0313 00:39:41.795329 2339 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.55:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.55:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:39:41.807592 kubelet[2339]: E0313 00:39:41.807546 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:39:41.807916 kubelet[2339]: E0313 00:39:41.807672 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:41.811328 kubelet[2339]: E0313 00:39:41.811256 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:39:41.811478 kubelet[2339]: E0313 00:39:41.811394 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:41.815848 kubelet[2339]: E0313 00:39:41.815625 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:39:41.815848 kubelet[2339]: E0313 00:39:41.815711 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:42.448454 kubelet[2339]: I0313 00:39:42.447367 2339 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:39:42.818198 kubelet[2339]: 
E0313 00:39:42.818069 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:39:42.818352 kubelet[2339]: E0313 00:39:42.818209 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:42.819292 kubelet[2339]: E0313 00:39:42.819221 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:39:42.819463 kubelet[2339]: E0313 00:39:42.819361 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:43.148743 kubelet[2339]: E0313 00:39:43.148607 2339 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 13 00:39:43.233147 kubelet[2339]: I0313 00:39:43.233066 2339 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 13 00:39:43.233147 kubelet[2339]: E0313 00:39:43.233113 2339 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 13 00:39:43.243556 kubelet[2339]: E0313 00:39:43.243485 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:43.344588 kubelet[2339]: E0313 00:39:43.344529 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:43.444938 kubelet[2339]: E0313 00:39:43.444893 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:43.545895 kubelet[2339]: E0313 00:39:43.545788 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:43.646847 kubelet[2339]: E0313 00:39:43.646781 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:43.748078 kubelet[2339]: E0313 00:39:43.747859 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:43.820212 kubelet[2339]: E0313 00:39:43.820167 2339 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 13 00:39:43.820364 kubelet[2339]: E0313 00:39:43.820313 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:43.848581 kubelet[2339]: E0313 00:39:43.848508 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:43.948992 kubelet[2339]: E0313 00:39:43.948852 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:44.049496 kubelet[2339]: E0313 00:39:44.049041 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:44.149903 kubelet[2339]: E0313 00:39:44.149841 2339 kubelet_node_status.go:466] "Error getting the current node from 
lister" err="node \"localhost\" not found" Mar 13 00:39:44.250244 kubelet[2339]: E0313 00:39:44.250201 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:44.350885 kubelet[2339]: E0313 00:39:44.350682 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:44.451728 kubelet[2339]: E0313 00:39:44.451593 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:44.552594 kubelet[2339]: E0313 00:39:44.552473 2339 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 13 00:39:44.668054 kubelet[2339]: I0313 00:39:44.667755 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:39:44.680210 kubelet[2339]: I0313 00:39:44.679767 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:44.687326 kubelet[2339]: I0313 00:39:44.687284 2339 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:44.752365 kubelet[2339]: I0313 00:39:44.752294 2339 apiserver.go:52] "Watching apiserver" Mar 13 00:39:44.755938 kubelet[2339]: E0313 00:39:44.755283 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:44.755938 kubelet[2339]: E0313 00:39:44.755320 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:44.768684 kubelet[2339]: I0313 00:39:44.768656 2339 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 00:39:44.820894 kubelet[2339]: E0313 00:39:44.820826 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:45.325199 kubelet[2339]: E0313 00:39:45.325109 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:45.485693 systemd[1]: Reload requested from client PID 2618 ('systemctl') (unit session-7.scope)... Mar 13 00:39:45.485723 systemd[1]: Reloading... Mar 13 00:39:45.571516 zram_generator::config[2661]: No configuration found. Mar 13 00:39:45.632973 kubelet[2339]: E0313 00:39:45.632807 2339 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:45.810739 systemd[1]: Reloading finished in 324 ms. Mar 13 00:39:45.844391 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:39:45.869708 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:39:45.870223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:45.870311 systemd[1]: kubelet.service: Consumed 1.214s CPU time, 131.5M memory peak. Mar 13 00:39:45.873550 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 13 00:39:46.119755 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:39:46.131855 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:39:46.182616 kubelet[2706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:39:46.182616 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:39:46.182616 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:39:46.183010 kubelet[2706]: I0313 00:39:46.182617 2706 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:39:46.190052 kubelet[2706]: I0313 00:39:46.189997 2706 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 00:39:46.190052 kubelet[2706]: I0313 00:39:46.190033 2706 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:39:46.192301 kubelet[2706]: I0313 00:39:46.191847 2706 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:39:46.194889 kubelet[2706]: I0313 00:39:46.194844 2706 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:39:46.197358 kubelet[2706]: I0313 00:39:46.197326 2706 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:39:46.207311 kubelet[2706]: I0313 00:39:46.207268 2706 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:39:46.213797 kubelet[2706]: I0313 00:39:46.213681 2706 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 13 00:39:46.214010 kubelet[2706]: I0313 00:39:46.213949 2706 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:39:46.214185 kubelet[2706]: I0313 00:39:46.213995 2706 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:39:46.214185 kubelet[2706]: I0313 00:39:46.214182 2706 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 00:39:46.214185 kubelet[2706]: I0313 00:39:46.214192 2706 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 00:39:46.214604 kubelet[2706]: I0313 00:39:46.214252 2706 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:39:46.214604 kubelet[2706]: I0313 00:39:46.214542 2706 kubelet.go:480] "Attempting to sync node with API server" Mar 13 00:39:46.214604 kubelet[2706]: I0313 00:39:46.214557 2706 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:39:46.214604 kubelet[2706]: I0313 00:39:46.214582 2706 kubelet.go:386] "Adding apiserver pod source" Mar 13 00:39:46.214604 kubelet[2706]: I0313 00:39:46.214598 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:39:46.219301 kubelet[2706]: I0313 00:39:46.218919 2706 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:39:46.220126 kubelet[2706]: I0313 00:39:46.220079 2706 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:39:46.228704 kubelet[2706]: I0313 00:39:46.228080 2706 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:39:46.228704 kubelet[2706]: I0313 00:39:46.228144 2706 server.go:1289] "Started kubelet" Mar 13 00:39:46.230453 kubelet[2706]: I0313 00:39:46.229930 2706 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:39:46.230453 kubelet[2706]: I0313 
00:39:46.230290 2706 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:39:46.231391 kubelet[2706]: I0313 00:39:46.230753 2706 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:39:46.232338 kubelet[2706]: I0313 00:39:46.232303 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:39:46.232776 kubelet[2706]: I0313 00:39:46.232702 2706 server.go:317] "Adding debug handlers to kubelet server" Mar 13 00:39:46.233193 kubelet[2706]: I0313 00:39:46.233171 2706 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:39:46.237439 kubelet[2706]: I0313 00:39:46.237361 2706 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:39:46.237972 kubelet[2706]: I0313 00:39:46.237545 2706 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:39:46.237972 kubelet[2706]: I0313 00:39:46.237924 2706 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:39:46.238341 kubelet[2706]: I0313 00:39:46.238268 2706 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:39:46.238443 kubelet[2706]: I0313 00:39:46.238374 2706 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:39:46.238804 kubelet[2706]: E0313 00:39:46.238755 2706 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:39:46.240809 kubelet[2706]: I0313 00:39:46.240716 2706 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:39:46.258121 kubelet[2706]: I0313 00:39:46.258092 2706 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 13 00:39:46.260379 kubelet[2706]: I0313 00:39:46.260317 2706 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 13 00:39:46.261181 kubelet[2706]: I0313 00:39:46.261119 2706 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:39:46.261217 kubelet[2706]: I0313 00:39:46.261200 2706 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
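The ratelimit.go line above configures the podresources endpoint with qps=100 and burstTokens=10: a token bucket in which ten requests can land back-to-back, after which tokens refill at 100 per second. A toy bucket with those numbers; the kubelet itself uses golang.org/x/time/rate rather than a hand-rolled loop like this:

package main

import (
	"fmt"
	"time"
)

// bucket is a minimal token bucket: tokens refill continuously at
// `rate` per second up to `capacity`, and each allowed request
// consumes one token.
type bucket struct {
	tokens   float64
	capacity float64
	rate     float64 // tokens per second
	last     time.Time
}

func (b *bucket) allow(now time.Time) bool {
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	b.last = now
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	b := &bucket{tokens: 10, capacity: 10, rate: 100, last: time.Now()}
	granted := 0
	for i := 0; i < 20; i++ { // 20 back-to-back requests
		if b.allow(time.Now()) {
			granted++
		}
	}
	fmt.Println("granted in burst:", granted) // ~10: the burst size
}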
Mar 13 00:39:46.261217 kubelet[2706]: I0313 00:39:46.261209 2706 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:39:46.262138 kubelet[2706]: E0313 00:39:46.262068 2706 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:39:46.300691 kubelet[2706]: I0313 00:39:46.300628 2706 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:39:46.300691 kubelet[2706]: I0313 00:39:46.300662 2706 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:39:46.300691 kubelet[2706]: I0313 00:39:46.300683 2706 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:39:46.300866 kubelet[2706]: I0313 00:39:46.300799 2706 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:39:46.300866 kubelet[2706]: I0313 00:39:46.300809 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:39:46.300866 kubelet[2706]: I0313 00:39:46.300824 2706 policy_none.go:49] "None policy: Start" Mar 13 00:39:46.300866 kubelet[2706]: I0313 00:39:46.300834 2706 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:39:46.300866 kubelet[2706]: I0313 00:39:46.300844 2706 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:39:46.300969 kubelet[2706]: I0313 00:39:46.300919 2706 state_mem.go:75] "Updated machine memory state" Mar 13 00:39:46.306188 kubelet[2706]: E0313 00:39:46.306135 2706 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:39:46.306383 kubelet[2706]: I0313 00:39:46.306308 2706 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:39:46.306383 kubelet[2706]: I0313 00:39:46.306338 2706 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:39:46.307994 kubelet[2706]: I0313 00:39:46.307862 2706 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:39:46.309524 kubelet[2706]: E0313 00:39:46.309475 2706 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 13 00:39:46.363806 kubelet[2706]: I0313 00:39:46.363701 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:46.363978 kubelet[2706]: I0313 00:39:46.363964 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:39:46.364158 kubelet[2706]: I0313 00:39:46.363983 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:46.372377 kubelet[2706]: E0313 00:39:46.372168 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:46.372699 kubelet[2706]: E0313 00:39:46.372555 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 13 00:39:46.372699 kubelet[2706]: E0313 00:39:46.372572 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:46.414193 kubelet[2706]: I0313 00:39:46.414163 2706 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 13 00:39:46.425043 kubelet[2706]: I0313 00:39:46.425011 2706 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 13 00:39:46.425130 kubelet[2706]: I0313 00:39:46.425093 2706 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 13 00:39:46.538798 kubelet[2706]: I0313 00:39:46.538679 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:46.538798 kubelet[2706]: I0313 00:39:46.538733 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:46.538798 kubelet[2706]: I0313 00:39:46.538754 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 13 00:39:46.538798 kubelet[2706]: I0313 00:39:46.538801 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda6d84d38ecbadc12dbb6ba9a315f74-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda6d84d38ecbadc12dbb6ba9a315f74\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:46.539048 kubelet[2706]: I0313 00:39:46.538818 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda6d84d38ecbadc12dbb6ba9a315f74-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"cda6d84d38ecbadc12dbb6ba9a315f74\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:46.539048 kubelet[2706]: I0313 00:39:46.538875 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:46.539048 kubelet[2706]: I0313 00:39:46.538961 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:46.539048 kubelet[2706]: I0313 00:39:46.539003 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:46.539048 kubelet[2706]: I0313 00:39:46.539029 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda6d84d38ecbadc12dbb6ba9a315f74-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda6d84d38ecbadc12dbb6ba9a315f74\") " pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:46.673964 kubelet[2706]: E0313 00:39:46.673464 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:46.673964 kubelet[2706]: E0313 00:39:46.673522 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:46.673964 kubelet[2706]: E0313 00:39:46.673781 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:47.215466 kubelet[2706]: I0313 00:39:47.215343 2706 apiserver.go:52] "Watching apiserver" Mar 13 00:39:47.238949 kubelet[2706]: I0313 00:39:47.238876 2706 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 00:39:47.281490 kubelet[2706]: I0313 00:39:47.280820 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:47.281490 kubelet[2706]: I0313 00:39:47.281257 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:47.283761 kubelet[2706]: I0313 00:39:47.283721 2706 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 13 00:39:47.293882 kubelet[2706]: E0313 00:39:47.293821 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 13 00:39:47.294153 kubelet[2706]: E0313 00:39:47.294021 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:47.296539 kubelet[2706]: E0313 00:39:47.295145 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 13 00:39:47.296539 kubelet[2706]: E0313 00:39:47.295445 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:47.296539 kubelet[2706]: E0313 00:39:47.295924 2706 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 13 00:39:47.298504 kubelet[2706]: E0313 00:39:47.296925 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:47.323118 kubelet[2706]: I0313 00:39:47.323028 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.323011535 podStartE2EDuration="3.323011535s" podCreationTimestamp="2026-03-13 00:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:39:47.321632893 +0000 UTC m=+1.183623763" watchObservedRunningTime="2026-03-13 00:39:47.323011535 +0000 UTC m=+1.185002403" Mar 13 00:39:47.344122 kubelet[2706]: I0313 00:39:47.344015 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.343997116 podStartE2EDuration="3.343997116s" podCreationTimestamp="2026-03-13 00:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:39:47.343260591 +0000 UTC m=+1.205251460" watchObservedRunningTime="2026-03-13 00:39:47.343997116 +0000 UTC m=+1.205987985" Mar 13 00:39:47.344333 kubelet[2706]: I0313 00:39:47.344147 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.344142716 podStartE2EDuration="3.344142716s" podCreationTimestamp="2026-03-13 00:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:39:47.330968654 +0000 UTC m=+1.192959523" watchObservedRunningTime="2026-03-13 00:39:47.344142716 +0000 UTC m=+1.206133585" Mar 13 00:39:48.282451 kubelet[2706]: E0313 00:39:48.282384 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:48.282872 kubelet[2706]: E0313 00:39:48.282633 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:48.283085 kubelet[2706]: E0313 00:39:48.283008 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:49.644814 kubelet[2706]: E0313 00:39:49.644738 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:53.601140 kubelet[2706]: I0313 00:39:53.601038 2706 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:39:53.661610 containerd[1555]: time="2026-03-13T00:39:53.620967455Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:39:53.707691 kubelet[2706]: I0313 00:39:53.706624 2706 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:39:54.370672 kubelet[2706]: E0313 00:39:54.370600 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:54.371625 kubelet[2706]: E0313 00:39:54.371561 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:54.414692 systemd[1]: Created slice kubepods-besteffort-pod0a9ba69d_b2a0_40a5_a7bd_2ad5791d899c.slice - libcontainer container kubepods-besteffort-pod0a9ba69d_b2a0_40a5_a7bd_2ad5791d899c.slice. Mar 13 00:39:54.500480 systemd[1]: Created slice kubepods-besteffort-podbd0cf164_3df2_4bdb_8052_f249a37e6157.slice - libcontainer container kubepods-besteffort-podbd0cf164_3df2_4bdb_8052_f249a37e6157.slice. Mar 13 00:39:54.558012 kubelet[2706]: I0313 00:39:54.557919 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c-kube-proxy\") pod \"kube-proxy-z2qqh\" (UID: \"0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c\") " pod="kube-system/kube-proxy-z2qqh" Mar 13 00:39:54.558012 kubelet[2706]: I0313 00:39:54.557984 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c-lib-modules\") pod \"kube-proxy-z2qqh\" (UID: \"0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c\") " pod="kube-system/kube-proxy-z2qqh" Mar 13 00:39:54.558012 kubelet[2706]: I0313 00:39:54.558007 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c-xtables-lock\") pod \"kube-proxy-z2qqh\" (UID: \"0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c\") " pod="kube-system/kube-proxy-z2qqh" Mar 13 00:39:54.558251 kubelet[2706]: I0313 00:39:54.558048 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grxhg\" (UniqueName: \"kubernetes.io/projected/0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c-kube-api-access-grxhg\") pod \"kube-proxy-z2qqh\" (UID: \"0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c\") " pod="kube-system/kube-proxy-z2qqh" Mar 13 00:39:54.660269 kubelet[2706]: I0313 00:39:54.659100 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bd0cf164-3df2-4bdb-8052-f249a37e6157-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-trhbw\" (UID: \"bd0cf164-3df2-4bdb-8052-f249a37e6157\") " pod="tigera-operator/tigera-operator-6bf85f8dd-trhbw" Mar 13 00:39:54.660269 kubelet[2706]: I0313 00:39:54.659384 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g4twt\" (UniqueName: \"kubernetes.io/projected/bd0cf164-3df2-4bdb-8052-f249a37e6157-kube-api-access-g4twt\") pod \"tigera-operator-6bf85f8dd-trhbw\" (UID: \"bd0cf164-3df2-4bdb-8052-f249a37e6157\") " pod="tigera-operator/tigera-operator-6bf85f8dd-trhbw" Mar 13 00:39:54.727525 kubelet[2706]: E0313 00:39:54.727292 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:54.730703 containerd[1555]: time="2026-03-13T00:39:54.730188536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2qqh,Uid:0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c,Namespace:kube-system,Attempt:0,}" Mar 13 00:39:54.774689 containerd[1555]: time="2026-03-13T00:39:54.774602039Z" level=info msg="connecting to shim 9558fa23082c63c73533eaeaf55a473e2b236010b1e05d65a79fbbf16d0a3386" address="unix:///run/containerd/s/a4069a961103a835a8d684286cfed7519d6501be04f6aec4a0feee515f8aebee" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:54.811529 containerd[1555]: time="2026-03-13T00:39:54.810982130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-trhbw,Uid:bd0cf164-3df2-4bdb-8052-f249a37e6157,Namespace:tigera-operator,Attempt:0,}" Mar 13 00:39:54.829724 systemd[1]: Started cri-containerd-9558fa23082c63c73533eaeaf55a473e2b236010b1e05d65a79fbbf16d0a3386.scope - libcontainer container 9558fa23082c63c73533eaeaf55a473e2b236010b1e05d65a79fbbf16d0a3386. Mar 13 00:39:54.843138 containerd[1555]: time="2026-03-13T00:39:54.843008303Z" level=info msg="connecting to shim 9a2935a6fd02db3b18a7c0bd7b4900b817745a5aa0bd0a43af6020af43f3f7be" address="unix:///run/containerd/s/1f8bbbaabfaca349c6307b1fc2ceaff9197d24979e7f5cc2c169818502f36e16" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:39:55.032813 containerd[1555]: time="2026-03-13T00:39:55.032704398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z2qqh,Uid:0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9558fa23082c63c73533eaeaf55a473e2b236010b1e05d65a79fbbf16d0a3386\"" Mar 13 00:39:55.035891 kubelet[2706]: E0313 00:39:55.035825 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:55.043039 containerd[1555]: time="2026-03-13T00:39:55.042954330Z" level=info msg="CreateContainer within sandbox \"9558fa23082c63c73533eaeaf55a473e2b236010b1e05d65a79fbbf16d0a3386\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:39:55.060659 systemd[1]: Started cri-containerd-9a2935a6fd02db3b18a7c0bd7b4900b817745a5aa0bd0a43af6020af43f3f7be.scope - libcontainer container 9a2935a6fd02db3b18a7c0bd7b4900b817745a5aa0bd0a43af6020af43f3f7be. 
Mar 13 00:39:55.068264 containerd[1555]: time="2026-03-13T00:39:55.067577634Z" level=info msg="Container 04d6066965e52b3f7c00044f0db1b55f5cd18b55b5b755a98eddbec57e89af30: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:55.083909 containerd[1555]: time="2026-03-13T00:39:55.083769055Z" level=info msg="CreateContainer within sandbox \"9558fa23082c63c73533eaeaf55a473e2b236010b1e05d65a79fbbf16d0a3386\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04d6066965e52b3f7c00044f0db1b55f5cd18b55b5b755a98eddbec57e89af30\"" Mar 13 00:39:55.084806 containerd[1555]: time="2026-03-13T00:39:55.084707374Z" level=info msg="StartContainer for \"04d6066965e52b3f7c00044f0db1b55f5cd18b55b5b755a98eddbec57e89af30\"" Mar 13 00:39:55.087093 containerd[1555]: time="2026-03-13T00:39:55.087041200Z" level=info msg="connecting to shim 04d6066965e52b3f7c00044f0db1b55f5cd18b55b5b755a98eddbec57e89af30" address="unix:///run/containerd/s/a4069a961103a835a8d684286cfed7519d6501be04f6aec4a0feee515f8aebee" protocol=ttrpc version=3 Mar 13 00:39:55.150652 systemd[1]: Started cri-containerd-04d6066965e52b3f7c00044f0db1b55f5cd18b55b5b755a98eddbec57e89af30.scope - libcontainer container 04d6066965e52b3f7c00044f0db1b55f5cd18b55b5b755a98eddbec57e89af30. Mar 13 00:39:55.154341 containerd[1555]: time="2026-03-13T00:39:55.154241740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-trhbw,Uid:bd0cf164-3df2-4bdb-8052-f249a37e6157,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9a2935a6fd02db3b18a7c0bd7b4900b817745a5aa0bd0a43af6020af43f3f7be\"" Mar 13 00:39:55.156337 containerd[1555]: time="2026-03-13T00:39:55.156285959Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 00:39:55.264100 containerd[1555]: time="2026-03-13T00:39:55.264050746Z" level=info msg="StartContainer for \"04d6066965e52b3f7c00044f0db1b55f5cd18b55b5b755a98eddbec57e89af30\" returns successfully" Mar 13 00:39:55.374374 kubelet[2706]: E0313 00:39:55.374053 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:55.374945 kubelet[2706]: E0313 00:39:55.374758 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:55.375072 kubelet[2706]: E0313 00:39:55.375034 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:56.251369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount346414389.mount: Deactivated successfully. 
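The RunPodSandbox / CreateContainer / StartContainer messages above are kubelet driving containerd over the CRI gRPC API on the containerd socket. A compressed, hypothetical sketch of that call sequence using the v1 CRI client; the image reference and metadata literals are illustrative stand-ins, not the values kubelet actually sent:

```go
// Sketch of the CRI sandbox/container lifecycle visible in the log:
// RunPodSandbox, then CreateContainer within it, then StartContainer.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-proxy-z2qqh", Namespace: "kube-system",
			Uid: "0a9ba69d-b2a0-40a5-a7bd-2ad5791d899c",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			// Illustrative image ref; the log does not record the one used.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.32.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```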
Mar 13 00:39:56.375330 kubelet[2706]: E0313 00:39:56.375225 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:39:57.419769 containerd[1555]: time="2026-03-13T00:39:57.419667379Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:57.420649 containerd[1555]: time="2026-03-13T00:39:57.420596585Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 13 00:39:57.421861 containerd[1555]: time="2026-03-13T00:39:57.421808276Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:57.424167 containerd[1555]: time="2026-03-13T00:39:57.424108728Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:39:57.425014 containerd[1555]: time="2026-03-13T00:39:57.424927575Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.268547568s" Mar 13 00:39:57.425014 containerd[1555]: time="2026-03-13T00:39:57.424971190Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 13 00:39:57.429315 containerd[1555]: time="2026-03-13T00:39:57.429202247Z" level=info msg="CreateContainer within sandbox \"9a2935a6fd02db3b18a7c0bd7b4900b817745a5aa0bd0a43af6020af43f3f7be\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 13 00:39:57.436956 containerd[1555]: time="2026-03-13T00:39:57.436904588Z" level=info msg="Container f60ca5ac62f4273b69e9814614b2b19bd57d973e82becbeb68c6e90b151bd86e: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:39:57.443948 containerd[1555]: time="2026-03-13T00:39:57.443913275Z" level=info msg="CreateContainer within sandbox \"9a2935a6fd02db3b18a7c0bd7b4900b817745a5aa0bd0a43af6020af43f3f7be\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f60ca5ac62f4273b69e9814614b2b19bd57d973e82becbeb68c6e90b151bd86e\"" Mar 13 00:39:57.444485 containerd[1555]: time="2026-03-13T00:39:57.444357495Z" level=info msg="StartContainer for \"f60ca5ac62f4273b69e9814614b2b19bd57d973e82becbeb68c6e90b151bd86e\"" Mar 13 00:39:57.445676 containerd[1555]: time="2026-03-13T00:39:57.445628948Z" level=info msg="connecting to shim f60ca5ac62f4273b69e9814614b2b19bd57d973e82becbeb68c6e90b151bd86e" address="unix:///run/containerd/s/1f8bbbaabfaca349c6307b1fc2ceaff9197d24979e7f5cc2c169818502f36e16" protocol=ttrpc version=3 Mar 13 00:39:57.508629 systemd[1]: Started cri-containerd-f60ca5ac62f4273b69e9814614b2b19bd57d973e82becbeb68c6e90b151bd86e.scope - libcontainer container f60ca5ac62f4273b69e9814614b2b19bd57d973e82becbeb68c6e90b151bd86e. 
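The ImageCreate events and the "Pulled image ... in 2.268547568s" line above record a pull through containerd's image service into the k8s.io namespace. Roughly the same pull can be reproduced with containerd's Go client; a sketch, assuming access to the same /run/containerd socket the shim logs show:

```go
// Minimal sketch of the image pull the log records, using containerd's
// Go client directly against the same socket and CRI image namespace.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the k8s.io namespace, as in the shim logs.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "quay.io/tigera/operator:v1.40.7", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", image.Name(), "digest:", image.Target().Digest)
}
```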
Mar 13 00:39:57.554185 containerd[1555]: time="2026-03-13T00:39:57.554099436Z" level=info msg="StartContainer for \"f60ca5ac62f4273b69e9814614b2b19bd57d973e82becbeb68c6e90b151bd86e\" returns successfully" Mar 13 00:39:57.924032 update_engine[1540]: I20260313 00:39:57.923897 1540 update_attempter.cc:509] Updating boot flags... Mar 13 00:39:58.389385 kubelet[2706]: I0313 00:39:58.389265 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z2qqh" podStartSLOduration=5.38922881 podStartE2EDuration="5.38922881s" podCreationTimestamp="2026-03-13 00:39:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:39:55.399104856 +0000 UTC m=+9.261095725" watchObservedRunningTime="2026-03-13 00:39:58.38922881 +0000 UTC m=+12.251219680" Mar 13 00:39:58.389385 kubelet[2706]: I0313 00:39:58.389379 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-trhbw" podStartSLOduration=2.119125539 podStartE2EDuration="4.389373403s" podCreationTimestamp="2026-03-13 00:39:54 +0000 UTC" firstStartedPulling="2026-03-13 00:39:55.155642318 +0000 UTC m=+9.017633187" lastFinishedPulling="2026-03-13 00:39:57.425890182 +0000 UTC m=+11.287881051" observedRunningTime="2026-03-13 00:39:58.389121283 +0000 UTC m=+12.251112162" watchObservedRunningTime="2026-03-13 00:39:58.389373403 +0000 UTC m=+12.251364271" Mar 13 00:39:59.654465 kubelet[2706]: E0313 00:39:59.654385 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:40:03.501942 sudo[1761]: pam_unix(sudo:session): session closed for user root Mar 13 00:40:03.506957 sshd[1760]: Connection closed by 10.0.0.1 port 58572 Mar 13 00:40:03.509524 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Mar 13 00:40:03.522494 systemd-logind[1537]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:40:03.523727 systemd[1]: sshd@6-10.0.0.55:22-10.0.0.1:58572.service: Deactivated successfully. Mar 13 00:40:03.530066 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:40:03.531655 systemd[1]: session-7.scope: Consumed 8.245s CPU time, 232.1M memory peak. Mar 13 00:40:03.536969 systemd-logind[1537]: Removed session 7. Mar 13 00:40:05.761538 systemd[1]: Created slice kubepods-besteffort-pod41acc697_d5c5_4c41_b557_51410828bb64.slice - libcontainer container kubepods-besteffort-pod41acc697_d5c5_4c41_b557_51410828bb64.slice. Mar 13 00:40:05.824988 systemd[1]: Created slice kubepods-besteffort-pod4adb2a18_6144_449c_8202_0e18f7c20086.slice - libcontainer container kubepods-besteffort-pod4adb2a18_6144_449c_8202_0e18f7c20086.slice. 
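The two durations in the pod_startup_latency_tracker lines above differ by exactly the image-pull window: podStartSLOduration excludes pull time, so for the tigera-operator pod 4.389373403s − (00:39:57.425890182 − 00:39:55.155642318) = 2.119125539s, while kube-proxy, with zero-value pull timestamps, has matching SLO and E2E durations. A small check of that arithmetic using the timestamps from the log:

```go
// Verifies podStartSLOduration = podStartE2EDuration - image-pull window,
// using the tigera-operator timestamps recorded in the log above.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339Nano, "2026-03-13T00:39:54Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-03-13T00:39:58.389373403Z")
	pullStart, _ := time.Parse(time.RFC3339Nano, "2026-03-13T00:39:55.155642318Z")
	pullEnd, _ := time.Parse(time.RFC3339Nano, "2026-03-13T00:39:57.425890182Z")

	e2e := running.Sub(created)         // 4.389373403s, the logged podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 2.119125539s, the logged podStartSLOduration
	fmt.Println("e2e:", e2e, "slo:", slo)
}
```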
Mar 13 00:40:05.848054 kubelet[2706]: I0313 00:40:05.847967 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/41acc697-d5c5-4c41-b557-51410828bb64-typha-certs\") pod \"calico-typha-5fd545ffd6-x899k\" (UID: \"41acc697-d5c5-4c41-b557-51410828bb64\") " pod="calico-system/calico-typha-5fd545ffd6-x899k" Mar 13 00:40:05.849014 kubelet[2706]: I0313 00:40:05.848064 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-flexvol-driver-host\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.849014 kubelet[2706]: I0313 00:40:05.848136 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-policysync\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.849014 kubelet[2706]: I0313 00:40:05.848157 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-sys-fs\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.849014 kubelet[2706]: I0313 00:40:05.848178 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-var-lib-calico\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.849014 kubelet[2706]: I0313 00:40:05.848200 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41acc697-d5c5-4c41-b557-51410828bb64-tigera-ca-bundle\") pod \"calico-typha-5fd545ffd6-x899k\" (UID: \"41acc697-d5c5-4c41-b557-51410828bb64\") " pod="calico-system/calico-typha-5fd545ffd6-x899k" Mar 13 00:40:05.849919 kubelet[2706]: I0313 00:40:05.848263 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wz24d\" (UniqueName: \"kubernetes.io/projected/41acc697-d5c5-4c41-b557-51410828bb64-kube-api-access-wz24d\") pod \"calico-typha-5fd545ffd6-x899k\" (UID: \"41acc697-d5c5-4c41-b557-51410828bb64\") " pod="calico-system/calico-typha-5fd545ffd6-x899k" Mar 13 00:40:05.849919 kubelet[2706]: I0313 00:40:05.848285 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-cni-net-dir\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.849919 kubelet[2706]: I0313 00:40:05.848305 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-nodeproc\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.849919 
kubelet[2706]: I0313 00:40:05.848492 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-var-run-calico\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.849919 kubelet[2706]: I0313 00:40:05.848591 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-xtables-lock\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.850080 kubelet[2706]: I0313 00:40:05.848616 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-cni-bin-dir\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.850080 kubelet[2706]: I0313 00:40:05.848639 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-bpffs\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.850080 kubelet[2706]: I0313 00:40:05.848661 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-lib-modules\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.850080 kubelet[2706]: I0313 00:40:05.848683 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6rdf\" (UniqueName: \"kubernetes.io/projected/4adb2a18-6144-449c-8202-0e18f7c20086-kube-api-access-w6rdf\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.850080 kubelet[2706]: I0313 00:40:05.848733 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4adb2a18-6144-449c-8202-0e18f7c20086-cni-log-dir\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.850251 kubelet[2706]: I0313 00:40:05.848802 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4adb2a18-6144-449c-8202-0e18f7c20086-tigera-ca-bundle\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.850251 kubelet[2706]: I0313 00:40:05.848840 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4adb2a18-6144-449c-8202-0e18f7c20086-node-certs\") pod \"calico-node-p4mvp\" (UID: \"4adb2a18-6144-449c-8202-0e18f7c20086\") " pod="calico-system/calico-node-p4mvp" Mar 13 00:40:05.910355 kubelet[2706]: E0313 00:40:05.910250 2706 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxx9r" podUID="ee00005b-f815-4ee6-a341-a1ca5e393fa9" Mar 13 00:40:05.950468 kubelet[2706]: I0313 00:40:05.949975 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ee00005b-f815-4ee6-a341-a1ca5e393fa9-kubelet-dir\") pod \"csi-node-driver-wxx9r\" (UID: \"ee00005b-f815-4ee6-a341-a1ca5e393fa9\") " pod="calico-system/csi-node-driver-wxx9r" Mar 13 00:40:05.950468 kubelet[2706]: I0313 00:40:05.950130 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ee00005b-f815-4ee6-a341-a1ca5e393fa9-socket-dir\") pod \"csi-node-driver-wxx9r\" (UID: \"ee00005b-f815-4ee6-a341-a1ca5e393fa9\") " pod="calico-system/csi-node-driver-wxx9r" Mar 13 00:40:05.950699 kubelet[2706]: I0313 00:40:05.950636 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ee00005b-f815-4ee6-a341-a1ca5e393fa9-registration-dir\") pod \"csi-node-driver-wxx9r\" (UID: \"ee00005b-f815-4ee6-a341-a1ca5e393fa9\") " pod="calico-system/csi-node-driver-wxx9r" Mar 13 00:40:05.950768 kubelet[2706]: I0313 00:40:05.950724 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ee00005b-f815-4ee6-a341-a1ca5e393fa9-varrun\") pod \"csi-node-driver-wxx9r\" (UID: \"ee00005b-f815-4ee6-a341-a1ca5e393fa9\") " pod="calico-system/csi-node-driver-wxx9r" Mar 13 00:40:05.950768 kubelet[2706]: I0313 00:40:05.950763 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5b6x\" (UniqueName: \"kubernetes.io/projected/ee00005b-f815-4ee6-a341-a1ca5e393fa9-kube-api-access-m5b6x\") pod \"csi-node-driver-wxx9r\" (UID: \"ee00005b-f815-4ee6-a341-a1ca5e393fa9\") " pod="calico-system/csi-node-driver-wxx9r" Mar 13 00:40:05.954378 kubelet[2706]: E0313 00:40:05.953211 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:05.954378 kubelet[2706]: W0313 00:40:05.953353 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:05.954378 kubelet[2706]: E0313 00:40:05.953489 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:05.954378 kubelet[2706]: E0313 00:40:05.953828 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:05.954378 kubelet[2706]: W0313 00:40:05.953837 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:05.954378 kubelet[2706]: E0313 00:40:05.953906 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [... identical FlexVolume probe-failure triplets (driver-call.go:262, driver-call.go:149, plugins.go:703) repeated from 00:40:05.954 through 00:40:06.061 elided ...] Mar 13 00:40:06.061548 kubelet[2706]: E0313 00:40:06.061521 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:06.061548 kubelet[2706]: W0313 00:40:06.061543 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:06.061617 kubelet[2706]: E0313 00:40:06.061554 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 13 00:40:06.070186 kubelet[2706]: E0313 00:40:06.070141 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:40:06.071375 containerd[1555]: time="2026-03-13T00:40:06.071134628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fd545ffd6-x899k,Uid:41acc697-d5c5-4c41-b557-51410828bb64,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:06.072196 kubelet[2706]: E0313 00:40:06.072116 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:06.072196 kubelet[2706]: W0313 00:40:06.072128 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:06.072196 kubelet[2706]: E0313 00:40:06.072140 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:06.129898 containerd[1555]: time="2026-03-13T00:40:06.129835965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4mvp,Uid:4adb2a18-6144-449c-8202-0e18f7c20086,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:06.131082 containerd[1555]: time="2026-03-13T00:40:06.131014803Z" level=info msg="connecting to shim 694c1f6daf60c541aca04e0e9a983eb94d31104625b40740d992534d1b134263" address="unix:///run/containerd/s/e1ef43cb0548d95524851ed23a55ff3b12ed7cbeca179fe566899b5f5066917d" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:06.163888 containerd[1555]: time="2026-03-13T00:40:06.163829694Z" level=info msg="connecting to shim 07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b" address="unix:///run/containerd/s/85a510b040fdaea8d28a3a33598b458f219c41c27d76b9cd18b0a8e0eba5a312" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:06.188904 systemd[1]: Started cri-containerd-694c1f6daf60c541aca04e0e9a983eb94d31104625b40740d992534d1b134263.scope - libcontainer container 694c1f6daf60c541aca04e0e9a983eb94d31104625b40740d992534d1b134263. Mar 13 00:40:06.240720 systemd[1]: Started cri-containerd-07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b.scope - libcontainer container 07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b. 
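The flood of driver-call.go / plugins.go errors (elided above) comes from kubelet's periodic FlexVolume plugin probe: it executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, finds no binary and hence gets empty output, and fails to unmarshal the JSON status it expects; Calico's flexvol-driver init container (whose pod2daemon-flexvol image is pulled a few lines below) is what eventually installs that binary. A minimal sketch of the init handshake a FlexVolume driver is expected to implement:

```go
// Sketch of the FlexVolume init handshake: kubelet runs `<driver> init` and
// expects a JSON status object on stdout. Empty stdout is exactly what
// produces the "unexpected end of JSON input" errors in the log above.
package main

import (
	"encoding/json"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		json.NewEncoder(os.Stdout).Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Other FlexVolume calls (mount, unmount, ...) are out of scope here.
	json.NewEncoder(os.Stdout).Encode(driverStatus{Status: "Not supported"})
	os.Exit(1)
}
```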
Mar 13 00:40:06.273884 containerd[1555]: time="2026-03-13T00:40:06.273827091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fd545ffd6-x899k,Uid:41acc697-d5c5-4c41-b557-51410828bb64,Namespace:calico-system,Attempt:0,} returns sandbox id \"694c1f6daf60c541aca04e0e9a983eb94d31104625b40740d992534d1b134263\"" Mar 13 00:40:06.275819 kubelet[2706]: E0313 00:40:06.275709 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:40:06.278343 containerd[1555]: time="2026-03-13T00:40:06.278288300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 13 00:40:06.297563 containerd[1555]: time="2026-03-13T00:40:06.297511578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-p4mvp,Uid:4adb2a18-6144-449c-8202-0e18f7c20086,Namespace:calico-system,Attempt:0,} returns sandbox id \"07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b\"" Mar 13 00:40:07.263301 kubelet[2706]: E0313 00:40:07.261953 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxx9r" podUID="ee00005b-f815-4ee6-a341-a1ca5e393fa9" Mar 13 00:40:07.267153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241254015.mount: Deactivated successfully. Mar 13 00:40:09.085115 containerd[1555]: time="2026-03-13T00:40:09.084992027Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:09.085938 containerd[1555]: time="2026-03-13T00:40:09.085885738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Mar 13 00:40:09.087198 containerd[1555]: time="2026-03-13T00:40:09.087162964Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:09.091132 containerd[1555]: time="2026-03-13T00:40:09.091064860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:09.091627 containerd[1555]: time="2026-03-13T00:40:09.091580428Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.813171021s" Mar 13 00:40:09.091627 containerd[1555]: time="2026-03-13T00:40:09.091621005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 13 00:40:09.093640 containerd[1555]: time="2026-03-13T00:40:09.093362748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 13 00:40:09.115580 containerd[1555]: time="2026-03-13T00:40:09.115517317Z" level=info msg="CreateContainer within sandbox \"694c1f6daf60c541aca04e0e9a983eb94d31104625b40740d992534d1b134263\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 13 00:40:09.127013 containerd[1555]: time="2026-03-13T00:40:09.126971215Z" level=info msg="Container ec841f86f145a04e6f3218704306315088616a4a9d2f265d8f5a8df427cd9903: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:09.140557 containerd[1555]: time="2026-03-13T00:40:09.140470586Z" level=info msg="CreateContainer within sandbox \"694c1f6daf60c541aca04e0e9a983eb94d31104625b40740d992534d1b134263\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ec841f86f145a04e6f3218704306315088616a4a9d2f265d8f5a8df427cd9903\"" Mar 13 00:40:09.141470 containerd[1555]: time="2026-03-13T00:40:09.141376656Z" level=info msg="StartContainer for \"ec841f86f145a04e6f3218704306315088616a4a9d2f265d8f5a8df427cd9903\"" Mar 13 00:40:09.143611 containerd[1555]: time="2026-03-13T00:40:09.143494308Z" level=info msg="connecting to shim ec841f86f145a04e6f3218704306315088616a4a9d2f265d8f5a8df427cd9903" address="unix:///run/containerd/s/e1ef43cb0548d95524851ed23a55ff3b12ed7cbeca179fe566899b5f5066917d" protocol=ttrpc version=3 Mar 13 00:40:09.178673 systemd[1]: Started cri-containerd-ec841f86f145a04e6f3218704306315088616a4a9d2f265d8f5a8df427cd9903.scope - libcontainer container ec841f86f145a04e6f3218704306315088616a4a9d2f265d8f5a8df427cd9903. Mar 13 00:40:09.262698 kubelet[2706]: E0313 00:40:09.262652 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxx9r" podUID="ee00005b-f815-4ee6-a341-a1ca5e393fa9" Mar 13 00:40:09.309132 containerd[1555]: time="2026-03-13T00:40:09.308933683Z" level=info msg="StartContainer for \"ec841f86f145a04e6f3218704306315088616a4a9d2f265d8f5a8df427cd9903\" returns successfully" Mar 13 00:40:09.908862 containerd[1555]: time="2026-03-13T00:40:09.908540217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:09.909689 containerd[1555]: time="2026-03-13T00:40:09.909636292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 13 00:40:09.911020 containerd[1555]: time="2026-03-13T00:40:09.910949951Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:09.915276 containerd[1555]: time="2026-03-13T00:40:09.915139846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:09.916810 containerd[1555]: time="2026-03-13T00:40:09.916710221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 823.320334ms" Mar 13 00:40:09.916810 containerd[1555]: time="2026-03-13T00:40:09.916764499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image 
reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 13 00:40:09.925466 containerd[1555]: time="2026-03-13T00:40:09.925257505Z" level=info msg="CreateContainer within sandbox \"07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 00:40:09.939350 kubelet[2706]: E0313 00:40:09.939236 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:40:09.944659 containerd[1555]: time="2026-03-13T00:40:09.944549718Z" level=info msg="Container 548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:09.955520 kubelet[2706]: E0313 00:40:09.954904 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.955520 kubelet[2706]: W0313 00:40:09.954926 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.955520 kubelet[2706]: E0313 00:40:09.954947 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.955520 kubelet[2706]: E0313 00:40:09.955469 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.955520 kubelet[2706]: W0313 00:40:09.955480 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.955520 kubelet[2706]: E0313 00:40:09.955491 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.955876 kubelet[2706]: E0313 00:40:09.955799 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.955911 kubelet[2706]: W0313 00:40:09.955876 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.955911 kubelet[2706]: E0313 00:40:09.955889 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.957039 kubelet[2706]: E0313 00:40:09.956840 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.957039 kubelet[2706]: W0313 00:40:09.956983 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.957039 kubelet[2706]: E0313 00:40:09.957000 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:09.958008 kubelet[2706]: E0313 00:40:09.957985 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.958008 kubelet[2706]: W0313 00:40:09.958001 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.958164 kubelet[2706]: E0313 00:40:09.958133 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.958638 kubelet[2706]: E0313 00:40:09.958521 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.958709 kubelet[2706]: W0313 00:40:09.958672 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.958709 kubelet[2706]: E0313 00:40:09.958687 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.959271 kubelet[2706]: E0313 00:40:09.959126 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.959271 kubelet[2706]: W0313 00:40:09.959143 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.959271 kubelet[2706]: E0313 00:40:09.959157 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.959691 kubelet[2706]: E0313 00:40:09.959674 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.959814 kubelet[2706]: W0313 00:40:09.959748 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.959814 kubelet[2706]: E0313 00:40:09.959762 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.960314 kubelet[2706]: E0313 00:40:09.960289 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.960314 kubelet[2706]: W0313 00:40:09.960311 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.960464 kubelet[2706]: E0313 00:40:09.960321 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:09.960739 kubelet[2706]: E0313 00:40:09.960709 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.960739 kubelet[2706]: W0313 00:40:09.960732 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.960815 kubelet[2706]: E0313 00:40:09.960742 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.960840 containerd[1555]: time="2026-03-13T00:40:09.960729372Z" level=info msg="CreateContainer within sandbox \"07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0\"" Mar 13 00:40:09.961114 kubelet[2706]: E0313 00:40:09.961077 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.961114 kubelet[2706]: W0313 00:40:09.961102 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.961264 kubelet[2706]: E0313 00:40:09.961120 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.961515 kubelet[2706]: E0313 00:40:09.961457 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.961698 kubelet[2706]: W0313 00:40:09.961599 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.961698 kubelet[2706]: E0313 00:40:09.961611 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.961948 containerd[1555]: time="2026-03-13T00:40:09.961564403Z" level=info msg="StartContainer for \"548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0\"" Mar 13 00:40:09.962069 kubelet[2706]: E0313 00:40:09.961877 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.962069 kubelet[2706]: W0313 00:40:09.961886 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.962069 kubelet[2706]: E0313 00:40:09.961895 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:09.962518 kubelet[2706]: E0313 00:40:09.962122 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.962518 kubelet[2706]: W0313 00:40:09.962136 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.962518 kubelet[2706]: E0313 00:40:09.962147 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.962762 kubelet[2706]: E0313 00:40:09.962688 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.962762 kubelet[2706]: W0313 00:40:09.962701 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.962762 kubelet[2706]: E0313 00:40:09.962717 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.964107 containerd[1555]: time="2026-03-13T00:40:09.964029789Z" level=info msg="connecting to shim 548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0" address="unix:///run/containerd/s/85a510b040fdaea8d28a3a33598b458f219c41c27d76b9cd18b0a8e0eba5a312" protocol=ttrpc version=3 Mar 13 00:40:09.989499 kubelet[2706]: E0313 00:40:09.989168 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.989499 kubelet[2706]: W0313 00:40:09.989195 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.989499 kubelet[2706]: E0313 00:40:09.989216 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.989781 kubelet[2706]: E0313 00:40:09.989738 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.989781 kubelet[2706]: W0313 00:40:09.989773 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.989917 kubelet[2706]: E0313 00:40:09.989788 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:09.990982 kubelet[2706]: E0313 00:40:09.990940 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.990982 kubelet[2706]: W0313 00:40:09.990977 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.991102 kubelet[2706]: E0313 00:40:09.990995 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.991632 kubelet[2706]: E0313 00:40:09.991528 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.991632 kubelet[2706]: W0313 00:40:09.991566 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.991632 kubelet[2706]: E0313 00:40:09.991617 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.992221 kubelet[2706]: E0313 00:40:09.992117 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.992221 kubelet[2706]: W0313 00:40:09.992199 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.992221 kubelet[2706]: E0313 00:40:09.992214 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.995357 kubelet[2706]: E0313 00:40:09.995283 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.995357 kubelet[2706]: W0313 00:40:09.995332 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.995545 kubelet[2706]: E0313 00:40:09.995364 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.996525 kubelet[2706]: E0313 00:40:09.996481 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.996525 kubelet[2706]: W0313 00:40:09.996515 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.996660 kubelet[2706]: E0313 00:40:09.996533 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:09.997124 kubelet[2706]: E0313 00:40:09.997054 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.997124 kubelet[2706]: W0313 00:40:09.997096 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.997124 kubelet[2706]: E0313 00:40:09.997111 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.997842 kubelet[2706]: E0313 00:40:09.997800 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.997842 kubelet[2706]: W0313 00:40:09.997832 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.997939 kubelet[2706]: E0313 00:40:09.997848 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.998386 kubelet[2706]: E0313 00:40:09.998343 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.998386 kubelet[2706]: W0313 00:40:09.998376 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.998561 kubelet[2706]: E0313 00:40:09.998391 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:09.998632 systemd[1]: Started cri-containerd-548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0.scope - libcontainer container 548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0. Mar 13 00:40:09.999060 kubelet[2706]: E0313 00:40:09.998989 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.999060 kubelet[2706]: W0313 00:40:09.999029 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:09.999060 kubelet[2706]: E0313 00:40:09.999044 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:09.999777 kubelet[2706]: E0313 00:40:09.999742 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:09.999777 kubelet[2706]: W0313 00:40:09.999763 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:10.000023 kubelet[2706]: E0313 00:40:09.999778 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:10.000270 kubelet[2706]: E0313 00:40:10.000232 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:10.000270 kubelet[2706]: W0313 00:40:10.000250 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:10.000270 kubelet[2706]: E0313 00:40:10.000265 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:10.001658 kubelet[2706]: E0313 00:40:10.001502 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:10.001658 kubelet[2706]: W0313 00:40:10.001538 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:10.001658 kubelet[2706]: E0313 00:40:10.001551 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:10.004846 kubelet[2706]: E0313 00:40:10.004805 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:10.004846 kubelet[2706]: W0313 00:40:10.004838 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:10.004965 kubelet[2706]: E0313 00:40:10.004854 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:10.005368 kubelet[2706]: E0313 00:40:10.005339 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:10.005368 kubelet[2706]: W0313 00:40:10.005353 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:10.005368 kubelet[2706]: E0313 00:40:10.005365 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:40:10.006574 kubelet[2706]: E0313 00:40:10.006505 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:10.006574 kubelet[2706]: W0313 00:40:10.006550 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:10.006574 kubelet[2706]: E0313 00:40:10.006564 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:10.006998 kubelet[2706]: E0313 00:40:10.006937 2706 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:40:10.006998 kubelet[2706]: W0313 00:40:10.006972 2706 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:40:10.006998 kubelet[2706]: E0313 00:40:10.006987 2706 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:40:10.122443 containerd[1555]: time="2026-03-13T00:40:10.122366251Z" level=info msg="StartContainer for \"548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0\" returns successfully" Mar 13 00:40:10.144852 systemd[1]: cri-containerd-548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0.scope: Deactivated successfully. Mar 13 00:40:10.150981 containerd[1555]: time="2026-03-13T00:40:10.150912091Z" level=info msg="received container exit event container_id:\"548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0\" id:\"548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0\" pid:3382 exited_at:{seconds:1773362410 nanos:149746097}" Mar 13 00:40:10.185038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-548cf8931e66eb92eaf07b4c40952f79407bba5b9e3fcb1757abb48209e6bfc0-rootfs.mount: Deactivated successfully. 
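
The run of driver-call failures above is one fault reported three ways per probe: the kubelet periodically rescans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each FlexVolume driver with `init`, and expects a JSON status object on stdout. The `nodeagent~uds/uds` binary does not exist on this node, so each call yields empty output and the JSON decode fails with "unexpected end of JSON input". A minimal Go sketch of that probe step (not the kubelet's actual driver-call code; the struct and function names are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus approximates the JSON a FlexVolume driver must print on
// stdout, e.g. {"status":"Success"}; the field set here is illustrative.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probeDriver(path string) (*DriverStatus, error) {
	out, err := exec.Command(path, "init").CombinedOutput()
	if err != nil {
		// The kubelet's executor reports this as "executable file not
		// found in $PATH"; output stays "" exactly as in the W0313 lines.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	var st DriverStatus
	// With empty output this fails the same way as the E0313 lines:
	// "unexpected end of JSON input".
	if jerr := json.Unmarshal(out, &st); jerr != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w", out, jerr)
	}
	return &st, nil
}

func main() {
	_, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
```

The noise is harmless as long as no workload actually mounts a FlexVolume from that directory; installing a driver that answers `init` with valid JSON, or removing the stale plugin directory, silences it.
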
Mar 13 00:40:10.944273 kubelet[2706]: I0313 00:40:10.944221 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:40:10.945016 kubelet[2706]: E0313 00:40:10.944638 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:40:10.946327 containerd[1555]: time="2026-03-13T00:40:10.946154999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 00:40:10.962092 kubelet[2706]: I0313 00:40:10.962009 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fd545ffd6-x899k" podStartSLOduration=3.146581633 podStartE2EDuration="5.961924491s" podCreationTimestamp="2026-03-13 00:40:05 +0000 UTC" firstStartedPulling="2026-03-13 00:40:06.277935311 +0000 UTC m=+20.139926179" lastFinishedPulling="2026-03-13 00:40:09.093278168 +0000 UTC m=+22.955269037" observedRunningTime="2026-03-13 00:40:09.955926916 +0000 UTC m=+23.817917785" watchObservedRunningTime="2026-03-13 00:40:10.961924491 +0000 UTC m=+24.823915379" Mar 13 00:40:11.265000 kubelet[2706]: E0313 00:40:11.264622 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxx9r" podUID="ee00005b-f815-4ee6-a341-a1ca5e393fa9" Mar 13 00:40:13.262770 kubelet[2706]: E0313 00:40:13.262496 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxx9r" podUID="ee00005b-f815-4ee6-a341-a1ca5e393fa9" Mar 13 00:40:15.262058 kubelet[2706]: E0313 00:40:15.261918 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxx9r" podUID="ee00005b-f815-4ee6-a341-a1ca5e393fa9" Mar 13 00:40:17.264810 kubelet[2706]: E0313 00:40:17.264585 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxx9r" podUID="ee00005b-f815-4ee6-a341-a1ca5e393fa9" Mar 13 00:40:18.097865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1527613842.mount: Deactivated successfully. 
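
The startup-latency record above is internally consistent: podStartSLOduration appears to be the end-to-end startup time minus the image-pull window, which is why it is smaller than podStartE2EDuration. A quick check with the log's own timestamps (assuming that subtraction; it matches the logged value to within a nanosecond of rounding):

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-03-13 00:40:05 +0000 UTC")
	running := mustParse("2026-03-13 00:40:10.961924491 +0000 UTC")
	pullStart := mustParse("2026-03-13 00:40:06.277935311 +0000 UTC")
	pullEnd := mustParse("2026-03-13 00:40:09.093278168 +0000 UTC")

	e2e := running.Sub(created)      // 5.961924491s = podStartE2EDuration
	pull := pullEnd.Sub(pullStart)   // 2.815342857s spent pulling calico/typha
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull ≈ 3.146581634s ≈ podStartSLOduration
}
```
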
Mar 13 00:40:18.160632 containerd[1555]: time="2026-03-13T00:40:18.160512622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:18.161511 containerd[1555]: time="2026-03-13T00:40:18.161371723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 13 00:40:18.162715 containerd[1555]: time="2026-03-13T00:40:18.162617252Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:18.166121 containerd[1555]: time="2026-03-13T00:40:18.166033612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:18.166733 containerd[1555]: time="2026-03-13T00:40:18.166627096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.220391684s" Mar 13 00:40:18.166733 containerd[1555]: time="2026-03-13T00:40:18.166674425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 13 00:40:18.173603 containerd[1555]: time="2026-03-13T00:40:18.173566451Z" level=info msg="CreateContainer within sandbox \"07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 00:40:18.200894 containerd[1555]: time="2026-03-13T00:40:18.200802469Z" level=info msg="Container f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:18.201948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257774924.mount: Deactivated successfully. Mar 13 00:40:18.255602 containerd[1555]: time="2026-03-13T00:40:18.255512567Z" level=info msg="CreateContainer within sandbox \"07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5\"" Mar 13 00:40:18.259465 containerd[1555]: time="2026-03-13T00:40:18.256287164Z" level=info msg="StartContainer for \"f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5\"" Mar 13 00:40:18.269928 containerd[1555]: time="2026-03-13T00:40:18.269870099Z" level=info msg="connecting to shim f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5" address="unix:///run/containerd/s/85a510b040fdaea8d28a3a33598b458f219c41c27d76b9cd18b0a8e0eba5a312" protocol=ttrpc version=3 Mar 13 00:40:18.326750 systemd[1]: Started cri-containerd-f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5.scope - libcontainer container f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5. 
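
The ImageCreate/Pulled records above are the CRI plugin finishing a 7.2 s pull of the ~160 MB calico/node image. The same pull can be reproduced against this containerd directly with its Go client; a minimal sketch (import paths are the containerd 1.x module layout, and the socket path and "k8s.io" namespace are the defaults this log implies):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Same containerd instance the kubelet talks to; the CRI plugin keeps
	// Kubernetes images in the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.31.4", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// Prints the repo tag and digest seen in the "Pulled image" record.
	fmt.Println(img.Name(), img.Target().Digest)
}
```
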
Mar 13 00:40:18.500277 containerd[1555]: time="2026-03-13T00:40:18.500174384Z" level=info msg="StartContainer for \"f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5\" returns successfully" Mar 13 00:40:18.604368 systemd[1]: cri-containerd-f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5.scope: Deactivated successfully. Mar 13 00:40:18.607737 containerd[1555]: time="2026-03-13T00:40:18.607641518Z" level=info msg="received container exit event container_id:\"f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5\" id:\"f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5\" pid:3442 exited_at:{seconds:1773362418 nanos:606807959}" Mar 13 00:40:19.000372 containerd[1555]: time="2026-03-13T00:40:18.999997238Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 00:40:19.097485 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f60e29b6ddb5365be693d2fcf04f26428633d34ced9a97b46bc0e373acbb78e5-rootfs.mount: Deactivated successfully. Mar 13 00:40:19.262549 kubelet[2706]: E0313 00:40:19.262253 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxx9r" podUID="ee00005b-f815-4ee6-a341-a1ca5e393fa9" Mar 13 00:40:21.262808 kubelet[2706]: E0313 00:40:21.262696 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxx9r" podUID="ee00005b-f815-4ee6-a341-a1ca5e393fa9" Mar 13 00:40:22.178047 containerd[1555]: time="2026-03-13T00:40:22.177937798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:22.179104 containerd[1555]: time="2026-03-13T00:40:22.179036875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 13 00:40:22.184295 containerd[1555]: time="2026-03-13T00:40:22.184204174Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.184158123s" Mar 13 00:40:22.184295 containerd[1555]: time="2026-03-13T00:40:22.184272063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 13 00:40:22.186948 containerd[1555]: time="2026-03-13T00:40:22.186869572Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:22.187942 containerd[1555]: time="2026-03-13T00:40:22.187868583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:22.192388 containerd[1555]: time="2026-03-13T00:40:22.192105768Z" level=info msg="CreateContainer within sandbox 
\"07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:40:22.207453 containerd[1555]: time="2026-03-13T00:40:22.207311719Z" level=info msg="Container bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:22.224089 containerd[1555]: time="2026-03-13T00:40:22.223970795Z" level=info msg="CreateContainer within sandbox \"07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885\"" Mar 13 00:40:22.225340 containerd[1555]: time="2026-03-13T00:40:22.224893773Z" level=info msg="StartContainer for \"bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885\"" Mar 13 00:40:22.227545 containerd[1555]: time="2026-03-13T00:40:22.227507986Z" level=info msg="connecting to shim bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885" address="unix:///run/containerd/s/85a510b040fdaea8d28a3a33598b458f219c41c27d76b9cd18b0a8e0eba5a312" protocol=ttrpc version=3 Mar 13 00:40:22.278721 systemd[1]: Started cri-containerd-bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885.scope - libcontainer container bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885. Mar 13 00:40:22.420114 containerd[1555]: time="2026-03-13T00:40:22.420021799Z" level=info msg="StartContainer for \"bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885\" returns successfully" Mar 13 00:40:22.676198 kubelet[2706]: I0313 00:40:22.676138 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:40:22.677700 kubelet[2706]: E0313 00:40:22.677605 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:40:23.014725 kubelet[2706]: E0313 00:40:23.014485 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:40:23.262112 kubelet[2706]: E0313 00:40:23.261975 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wxx9r" podUID="ee00005b-f815-4ee6-a341-a1ca5e393fa9" Mar 13 00:40:23.351113 systemd[1]: cri-containerd-bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885.scope: Deactivated successfully. Mar 13 00:40:23.351626 systemd[1]: cri-containerd-bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885.scope: Consumed 996ms CPU time, 190.5M memory peak, 5.4M read from disk, 177M written to disk. 
Mar 13 00:40:23.376395 containerd[1555]: time="2026-03-13T00:40:23.376302612Z" level=info msg="received container exit event container_id:\"bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885\" id:\"bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885\" pid:3505 exited_at:{seconds:1773362423 nanos:375300759}" Mar 13 00:40:23.416161 kubelet[2706]: I0313 00:40:23.416126 2706 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 13 00:40:23.502985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc58c89c3faa4466bcdb4d4a9a8dee38f82fb1254193f0a37b934c8013cad885-rootfs.mount: Deactivated successfully. Mar 13 00:40:23.551546 systemd[1]: Created slice kubepods-burstable-pod8b4e81f6_f633_4e5a_940c_c9d165d3fd0e.slice - libcontainer container kubepods-burstable-pod8b4e81f6_f633_4e5a_940c_c9d165d3fd0e.slice. Mar 13 00:40:23.570200 systemd[1]: Created slice kubepods-besteffort-pod4271cba1_49ad_4316_b909_26a45f24a613.slice - libcontainer container kubepods-besteffort-pod4271cba1_49ad_4316_b909_26a45f24a613.slice. Mar 13 00:40:23.584383 systemd[1]: Created slice kubepods-besteffort-pod26c4e91b_29e2_464c_92bb_dfe00ec079cd.slice - libcontainer container kubepods-besteffort-pod26c4e91b_29e2_464c_92bb_dfe00ec079cd.slice. Mar 13 00:40:23.596020 systemd[1]: Created slice kubepods-besteffort-podb013b712_c24d_468f_86cc_2f4dbb3799a5.slice - libcontainer container kubepods-besteffort-podb013b712_c24d_468f_86cc_2f4dbb3799a5.slice. Mar 13 00:40:23.604584 systemd[1]: Created slice kubepods-besteffort-pod8e0c4400_8e9e_40d2_b63d_330be065ad79.slice - libcontainer container kubepods-besteffort-pod8e0c4400_8e9e_40d2_b63d_330be065ad79.slice. Mar 13 00:40:23.613784 systemd[1]: Created slice kubepods-burstable-pode6f02b0a_183f_4b3c_87a6_0ef7fdef800d.slice - libcontainer container kubepods-burstable-pode6f02b0a_183f_4b3c_87a6_0ef7fdef800d.slice. Mar 13 00:40:23.621644 systemd[1]: Created slice kubepods-besteffort-pod8774e408_f6b2_4820_93fe_f59e23d02121.slice - libcontainer container kubepods-besteffort-pod8774e408_f6b2_4820_93fe_f59e23d02121.slice. 
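
The slice names systemd just created encode each pod's QoS class and UID: the kubelet's systemd cgroup driver escapes "-" to "_" in the UID (since "-" separates slice hierarchy levels) and nests burstable and besteffort pods under a per-class slice, with guaranteed pods sitting directly under kubepods.slice. A simplified sketch of that mapping, not the kubelet's actual cgroup-manager code:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceForPod reproduces the naming visible in the journal above.
func sliceForPod(qos, uid string) string {
	esc := strings.ReplaceAll(uid, "-", "_") // "-" would split the slice hierarchy
	if qos == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", esc)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, esc)
}

func main() {
	// coredns-674b8bbfcf-7qrc5 (burstable), matching the first slice above:
	fmt.Println(sliceForPod("burstable", "8b4e81f6-f633-4e5a-940c-c9d165d3fd0e"))
	// whisker-647d86f8bc-x82kj (besteffort):
	fmt.Println(sliceForPod("besteffort", "4271cba1-49ad-4316-b909-26a45f24a613"))
}
```
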
Mar 13 00:40:23.655468 kubelet[2706]: I0313 00:40:23.654991 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b4e81f6-f633-4e5a-940c-c9d165d3fd0e-config-volume\") pod \"coredns-674b8bbfcf-7qrc5\" (UID: \"8b4e81f6-f633-4e5a-940c-c9d165d3fd0e\") " pod="kube-system/coredns-674b8bbfcf-7qrc5" Mar 13 00:40:23.655468 kubelet[2706]: I0313 00:40:23.655031 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4271cba1-49ad-4316-b909-26a45f24a613-whisker-ca-bundle\") pod \"whisker-647d86f8bc-x82kj\" (UID: \"4271cba1-49ad-4316-b909-26a45f24a613\") " pod="calico-system/whisker-647d86f8bc-x82kj" Mar 13 00:40:23.655468 kubelet[2706]: I0313 00:40:23.655049 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e0c4400-8e9e-40d2-b63d-330be065ad79-calico-apiserver-certs\") pod \"calico-apiserver-6b5f9d757-5wn7z\" (UID: \"8e0c4400-8e9e-40d2-b63d-330be065ad79\") " pod="calico-system/calico-apiserver-6b5f9d757-5wn7z" Mar 13 00:40:23.655468 kubelet[2706]: I0313 00:40:23.655092 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b013b712-c24d-468f-86cc-2f4dbb3799a5-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-tp5kj\" (UID: \"b013b712-c24d-468f-86cc-2f4dbb3799a5\") " pod="calico-system/goldmane-5b85766d88-tp5kj" Mar 13 00:40:23.655468 kubelet[2706]: I0313 00:40:23.655213 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k729k\" (UniqueName: \"kubernetes.io/projected/b013b712-c24d-468f-86cc-2f4dbb3799a5-kube-api-access-k729k\") pod \"goldmane-5b85766d88-tp5kj\" (UID: \"b013b712-c24d-468f-86cc-2f4dbb3799a5\") " pod="calico-system/goldmane-5b85766d88-tp5kj" Mar 13 00:40:23.655950 kubelet[2706]: I0313 00:40:23.655362 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26c4e91b-29e2-464c-92bb-dfe00ec079cd-tigera-ca-bundle\") pod \"calico-kube-controllers-5c6bd96965-2jhx6\" (UID: \"26c4e91b-29e2-464c-92bb-dfe00ec079cd\") " pod="calico-system/calico-kube-controllers-5c6bd96965-2jhx6" Mar 13 00:40:23.655950 kubelet[2706]: I0313 00:40:23.655534 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8774e408-f6b2-4820-93fe-f59e23d02121-calico-apiserver-certs\") pod \"calico-apiserver-6b5f9d757-z28tt\" (UID: \"8774e408-f6b2-4820-93fe-f59e23d02121\") " pod="calico-system/calico-apiserver-6b5f9d757-z28tt" Mar 13 00:40:23.655950 kubelet[2706]: I0313 00:40:23.655783 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b013b712-c24d-468f-86cc-2f4dbb3799a5-config\") pod \"goldmane-5b85766d88-tp5kj\" (UID: \"b013b712-c24d-468f-86cc-2f4dbb3799a5\") " pod="calico-system/goldmane-5b85766d88-tp5kj" Mar 13 00:40:23.656074 kubelet[2706]: I0313 00:40:23.655994 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/b013b712-c24d-468f-86cc-2f4dbb3799a5-goldmane-key-pair\") pod \"goldmane-5b85766d88-tp5kj\" (UID: \"b013b712-c24d-468f-86cc-2f4dbb3799a5\") " pod="calico-system/goldmane-5b85766d88-tp5kj" Mar 13 00:40:23.656074 kubelet[2706]: I0313 00:40:23.656025 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6f02b0a-183f-4b3c-87a6-0ef7fdef800d-config-volume\") pod \"coredns-674b8bbfcf-h2clt\" (UID: \"e6f02b0a-183f-4b3c-87a6-0ef7fdef800d\") " pod="kube-system/coredns-674b8bbfcf-h2clt" Mar 13 00:40:23.656074 kubelet[2706]: I0313 00:40:23.656052 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxj92\" (UniqueName: \"kubernetes.io/projected/8b4e81f6-f633-4e5a-940c-c9d165d3fd0e-kube-api-access-mxj92\") pod \"coredns-674b8bbfcf-7qrc5\" (UID: \"8b4e81f6-f633-4e5a-940c-c9d165d3fd0e\") " pod="kube-system/coredns-674b8bbfcf-7qrc5" Mar 13 00:40:23.656212 kubelet[2706]: I0313 00:40:23.656101 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scmjs\" (UniqueName: \"kubernetes.io/projected/26c4e91b-29e2-464c-92bb-dfe00ec079cd-kube-api-access-scmjs\") pod \"calico-kube-controllers-5c6bd96965-2jhx6\" (UID: \"26c4e91b-29e2-464c-92bb-dfe00ec079cd\") " pod="calico-system/calico-kube-controllers-5c6bd96965-2jhx6" Mar 13 00:40:23.656212 kubelet[2706]: I0313 00:40:23.656132 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2d5\" (UniqueName: \"kubernetes.io/projected/8e0c4400-8e9e-40d2-b63d-330be065ad79-kube-api-access-lx2d5\") pod \"calico-apiserver-6b5f9d757-5wn7z\" (UID: \"8e0c4400-8e9e-40d2-b63d-330be065ad79\") " pod="calico-system/calico-apiserver-6b5f9d757-5wn7z" Mar 13 00:40:23.656212 kubelet[2706]: I0313 00:40:23.656169 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvs7p\" (UniqueName: \"kubernetes.io/projected/8774e408-f6b2-4820-93fe-f59e23d02121-kube-api-access-dvs7p\") pod \"calico-apiserver-6b5f9d757-z28tt\" (UID: \"8774e408-f6b2-4820-93fe-f59e23d02121\") " pod="calico-system/calico-apiserver-6b5f9d757-z28tt" Mar 13 00:40:23.656212 kubelet[2706]: I0313 00:40:23.656190 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwxwq\" (UniqueName: \"kubernetes.io/projected/e6f02b0a-183f-4b3c-87a6-0ef7fdef800d-kube-api-access-lwxwq\") pod \"coredns-674b8bbfcf-h2clt\" (UID: \"e6f02b0a-183f-4b3c-87a6-0ef7fdef800d\") " pod="kube-system/coredns-674b8bbfcf-h2clt" Mar 13 00:40:23.656212 kubelet[2706]: I0313 00:40:23.656206 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jpmp\" (UniqueName: \"kubernetes.io/projected/4271cba1-49ad-4316-b909-26a45f24a613-kube-api-access-7jpmp\") pod \"whisker-647d86f8bc-x82kj\" (UID: \"4271cba1-49ad-4316-b909-26a45f24a613\") " pod="calico-system/whisker-647d86f8bc-x82kj" Mar 13 00:40:23.656394 kubelet[2706]: I0313 00:40:23.656221 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4271cba1-49ad-4316-b909-26a45f24a613-nginx-config\") pod \"whisker-647d86f8bc-x82kj\" (UID: \"4271cba1-49ad-4316-b909-26a45f24a613\") " pod="calico-system/whisker-647d86f8bc-x82kj" 
Mar 13 00:40:23.656394 kubelet[2706]: I0313 00:40:23.656235 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4271cba1-49ad-4316-b909-26a45f24a613-whisker-backend-key-pair\") pod \"whisker-647d86f8bc-x82kj\" (UID: \"4271cba1-49ad-4316-b909-26a45f24a613\") " pod="calico-system/whisker-647d86f8bc-x82kj" Mar 13 00:40:23.861551 kubelet[2706]: E0313 00:40:23.860665 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:40:23.864210 containerd[1555]: time="2026-03-13T00:40:23.863475407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7qrc5,Uid:8b4e81f6-f633-4e5a-940c-c9d165d3fd0e,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:23.892651 containerd[1555]: time="2026-03-13T00:40:23.891842579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6bd96965-2jhx6,Uid:26c4e91b-29e2-464c-92bb-dfe00ec079cd,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:23.893031 containerd[1555]: time="2026-03-13T00:40:23.892951090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647d86f8bc-x82kj,Uid:4271cba1-49ad-4316-b909-26a45f24a613,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:23.904829 containerd[1555]: time="2026-03-13T00:40:23.904660952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-tp5kj,Uid:b013b712-c24d-468f-86cc-2f4dbb3799a5,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:23.912716 containerd[1555]: time="2026-03-13T00:40:23.912481186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5f9d757-5wn7z,Uid:8e0c4400-8e9e-40d2-b63d-330be065ad79,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:23.920114 kubelet[2706]: E0313 00:40:23.920049 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:40:23.923051 containerd[1555]: time="2026-03-13T00:40:23.922522192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2clt,Uid:e6f02b0a-183f-4b3c-87a6-0ef7fdef800d,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:23.929065 containerd[1555]: time="2026-03-13T00:40:23.928012701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5f9d757-z28tt,Uid:8774e408-f6b2-4820-93fe-f59e23d02121,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:24.097483 containerd[1555]: time="2026-03-13T00:40:24.097327620Z" level=info msg="CreateContainer within sandbox \"07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 00:40:24.136287 containerd[1555]: time="2026-03-13T00:40:24.134252603Z" level=info msg="Container c9f489e718619a2347a2bae66fe8cbec4edebb46e692d3e22f6e193d83c62c72: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:24.166701 containerd[1555]: time="2026-03-13T00:40:24.166633668Z" level=info msg="CreateContainer within sandbox \"07a17ec96152fe251bf4e8234a48ee1d45d83acbffee67fd4db88df13aabff2b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c9f489e718619a2347a2bae66fe8cbec4edebb46e692d3e22f6e193d83c62c72\"" Mar 13 00:40:24.168479 containerd[1555]: time="2026-03-13T00:40:24.167707253Z" level=info msg="StartContainer for 
\"c9f489e718619a2347a2bae66fe8cbec4edebb46e692d3e22f6e193d83c62c72\"" Mar 13 00:40:24.169365 containerd[1555]: time="2026-03-13T00:40:24.169296998Z" level=info msg="connecting to shim c9f489e718619a2347a2bae66fe8cbec4edebb46e692d3e22f6e193d83c62c72" address="unix:///run/containerd/s/85a510b040fdaea8d28a3a33598b458f219c41c27d76b9cd18b0a8e0eba5a312" protocol=ttrpc version=3 Mar 13 00:40:24.188569 containerd[1555]: time="2026-03-13T00:40:24.188495676Z" level=error msg="Failed to destroy network for sandbox \"a96a79a1fe6e076c81d8c4505b10d78dc5f46c1a4d78c77afc7ca320670717b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.195982 containerd[1555]: time="2026-03-13T00:40:24.195907927Z" level=error msg="Failed to destroy network for sandbox \"c9989773a0ad99a15dae22340954149fe19dee8863bfaee611be5e0bd2f4cbeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.196483 containerd[1555]: time="2026-03-13T00:40:24.196204244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6bd96965-2jhx6,Uid:26c4e91b-29e2-464c-92bb-dfe00ec079cd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96a79a1fe6e076c81d8c4505b10d78dc5f46c1a4d78c77afc7ca320670717b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.204335 containerd[1555]: time="2026-03-13T00:40:24.204195183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-tp5kj,Uid:b013b712-c24d-468f-86cc-2f4dbb3799a5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9989773a0ad99a15dae22340954149fe19dee8863bfaee611be5e0bd2f4cbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.208051 kubelet[2706]: E0313 00:40:24.207891 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96a79a1fe6e076c81d8c4505b10d78dc5f46c1a4d78c77afc7ca320670717b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.208265 kubelet[2706]: E0313 00:40:24.208052 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96a79a1fe6e076c81d8c4505b10d78dc5f46c1a4d78c77afc7ca320670717b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c6bd96965-2jhx6" Mar 13 00:40:24.208265 kubelet[2706]: E0313 00:40:24.208119 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a96a79a1fe6e076c81d8c4505b10d78dc5f46c1a4d78c77afc7ca320670717b5\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c6bd96965-2jhx6" Mar 13 00:40:24.208265 kubelet[2706]: E0313 00:40:24.208170 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c6bd96965-2jhx6_calico-system(26c4e91b-29e2-464c-92bb-dfe00ec079cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c6bd96965-2jhx6_calico-system(26c4e91b-29e2-464c-92bb-dfe00ec079cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a96a79a1fe6e076c81d8c4505b10d78dc5f46c1a4d78c77afc7ca320670717b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c6bd96965-2jhx6" podUID="26c4e91b-29e2-464c-92bb-dfe00ec079cd" Mar 13 00:40:24.209003 kubelet[2706]: E0313 00:40:24.208961 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9989773a0ad99a15dae22340954149fe19dee8863bfaee611be5e0bd2f4cbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.209158 kubelet[2706]: E0313 00:40:24.209013 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9989773a0ad99a15dae22340954149fe19dee8863bfaee611be5e0bd2f4cbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-tp5kj" Mar 13 00:40:24.209158 kubelet[2706]: E0313 00:40:24.209086 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9989773a0ad99a15dae22340954149fe19dee8863bfaee611be5e0bd2f4cbeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-tp5kj" Mar 13 00:40:24.209158 kubelet[2706]: E0313 00:40:24.209132 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-tp5kj_calico-system(b013b712-c24d-468f-86cc-2f4dbb3799a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-tp5kj_calico-system(b013b712-c24d-468f-86cc-2f4dbb3799a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9989773a0ad99a15dae22340954149fe19dee8863bfaee611be5e0bd2f4cbeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-tp5kj" podUID="b013b712-c24d-468f-86cc-2f4dbb3799a5" Mar 13 00:40:24.227622 containerd[1555]: time="2026-03-13T00:40:24.227540540Z" level=error msg="Failed to destroy network for sandbox \"9fd869f1422c50f46e9b9c49a182c18201e40e1e3bbab7594965ba0b3fbbfbd3\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.235590 containerd[1555]: time="2026-03-13T00:40:24.235499351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7qrc5,Uid:8b4e81f6-f633-4e5a-940c-c9d165d3fd0e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd869f1422c50f46e9b9c49a182c18201e40e1e3bbab7594965ba0b3fbbfbd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.235985 kubelet[2706]: E0313 00:40:24.235871 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd869f1422c50f46e9b9c49a182c18201e40e1e3bbab7594965ba0b3fbbfbd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.235985 kubelet[2706]: E0313 00:40:24.235937 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd869f1422c50f46e9b9c49a182c18201e40e1e3bbab7594965ba0b3fbbfbd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7qrc5" Mar 13 00:40:24.236105 kubelet[2706]: E0313 00:40:24.235973 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fd869f1422c50f46e9b9c49a182c18201e40e1e3bbab7594965ba0b3fbbfbd3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-7qrc5" Mar 13 00:40:24.236133 kubelet[2706]: E0313 00:40:24.236105 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-7qrc5_kube-system(8b4e81f6-f633-4e5a-940c-c9d165d3fd0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-7qrc5_kube-system(8b4e81f6-f633-4e5a-940c-c9d165d3fd0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fd869f1422c50f46e9b9c49a182c18201e40e1e3bbab7594965ba0b3fbbfbd3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-7qrc5" podUID="8b4e81f6-f633-4e5a-940c-c9d165d3fd0e" Mar 13 00:40:24.253747 systemd[1]: Started cri-containerd-c9f489e718619a2347a2bae66fe8cbec4edebb46e692d3e22f6e193d83c62c72.scope - libcontainer container c9f489e718619a2347a2bae66fe8cbec4edebb46e692d3e22f6e193d83c62c72. 
Mar 13 00:40:24.272862 containerd[1555]: time="2026-03-13T00:40:24.272820385Z" level=error msg="Failed to destroy network for sandbox \"10907781316a668cc13f097b3910516c34617e11f8e488e5bc3c9f06d62eeb45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.276076 containerd[1555]: time="2026-03-13T00:40:24.276006958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-647d86f8bc-x82kj,Uid:4271cba1-49ad-4316-b909-26a45f24a613,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10907781316a668cc13f097b3910516c34617e11f8e488e5bc3c9f06d62eeb45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.276797 kubelet[2706]: E0313 00:40:24.276580 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10907781316a668cc13f097b3910516c34617e11f8e488e5bc3c9f06d62eeb45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.276882 kubelet[2706]: E0313 00:40:24.276799 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10907781316a668cc13f097b3910516c34617e11f8e488e5bc3c9f06d62eeb45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-647d86f8bc-x82kj" Mar 13 00:40:24.276917 kubelet[2706]: E0313 00:40:24.276896 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10907781316a668cc13f097b3910516c34617e11f8e488e5bc3c9f06d62eeb45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-647d86f8bc-x82kj" Mar 13 00:40:24.280255 kubelet[2706]: E0313 00:40:24.277758 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-647d86f8bc-x82kj_calico-system(4271cba1-49ad-4316-b909-26a45f24a613)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-647d86f8bc-x82kj_calico-system(4271cba1-49ad-4316-b909-26a45f24a613)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10907781316a668cc13f097b3910516c34617e11f8e488e5bc3c9f06d62eeb45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-647d86f8bc-x82kj" podUID="4271cba1-49ad-4316-b909-26a45f24a613" Mar 13 00:40:24.284116 containerd[1555]: time="2026-03-13T00:40:24.284030366Z" level=error msg="Failed to destroy network for sandbox \"870387889c95c0cb4df591aa31b88b6c85513a510bf3702a5037c8b38103a822\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 13 00:40:24.286335 containerd[1555]: time="2026-03-13T00:40:24.286257679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5f9d757-5wn7z,Uid:8e0c4400-8e9e-40d2-b63d-330be065ad79,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"870387889c95c0cb4df591aa31b88b6c85513a510bf3702a5037c8b38103a822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.287106 kubelet[2706]: E0313 00:40:24.286868 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870387889c95c0cb4df591aa31b88b6c85513a510bf3702a5037c8b38103a822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.287106 kubelet[2706]: E0313 00:40:24.286984 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870387889c95c0cb4df591aa31b88b6c85513a510bf3702a5037c8b38103a822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b5f9d757-5wn7z" Mar 13 00:40:24.287106 kubelet[2706]: E0313 00:40:24.287007 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"870387889c95c0cb4df591aa31b88b6c85513a510bf3702a5037c8b38103a822\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b5f9d757-5wn7z" Mar 13 00:40:24.287609 kubelet[2706]: E0313 00:40:24.287582 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b5f9d757-5wn7z_calico-system(8e0c4400-8e9e-40d2-b63d-330be065ad79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b5f9d757-5wn7z_calico-system(8e0c4400-8e9e-40d2-b63d-330be065ad79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"870387889c95c0cb4df591aa31b88b6c85513a510bf3702a5037c8b38103a822\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b5f9d757-5wn7z" podUID="8e0c4400-8e9e-40d2-b63d-330be065ad79" Mar 13 00:40:24.294847 containerd[1555]: time="2026-03-13T00:40:24.294781617Z" level=error msg="Failed to destroy network for sandbox \"608f25f24181bb198360c40d8a9747785f7a8ce5b68f3e6b3257da3ca916b018\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.296352 containerd[1555]: time="2026-03-13T00:40:24.296248743Z" level=error msg="Failed to destroy network for sandbox \"d08d91fe9bd218392c5098f1c4efbd8bb493e5f5d3d4baa82d8dbc888e959544\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.296674 containerd[1555]: time="2026-03-13T00:40:24.296580722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2clt,Uid:e6f02b0a-183f-4b3c-87a6-0ef7fdef800d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"608f25f24181bb198360c40d8a9747785f7a8ce5b68f3e6b3257da3ca916b018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.297031 kubelet[2706]: E0313 00:40:24.296953 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608f25f24181bb198360c40d8a9747785f7a8ce5b68f3e6b3257da3ca916b018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.297133 kubelet[2706]: E0313 00:40:24.297083 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608f25f24181bb198360c40d8a9747785f7a8ce5b68f3e6b3257da3ca916b018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h2clt" Mar 13 00:40:24.297133 kubelet[2706]: E0313 00:40:24.297111 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608f25f24181bb198360c40d8a9747785f7a8ce5b68f3e6b3257da3ca916b018\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-h2clt" Mar 13 00:40:24.297195 kubelet[2706]: E0313 00:40:24.297151 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-h2clt_kube-system(e6f02b0a-183f-4b3c-87a6-0ef7fdef800d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-h2clt_kube-system(e6f02b0a-183f-4b3c-87a6-0ef7fdef800d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"608f25f24181bb198360c40d8a9747785f7a8ce5b68f3e6b3257da3ca916b018\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-h2clt" podUID="e6f02b0a-183f-4b3c-87a6-0ef7fdef800d" Mar 13 00:40:24.297953 containerd[1555]: time="2026-03-13T00:40:24.297902870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5f9d757-z28tt,Uid:8774e408-f6b2-4820-93fe-f59e23d02121,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d08d91fe9bd218392c5098f1c4efbd8bb493e5f5d3d4baa82d8dbc888e959544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.298453 
kubelet[2706]: E0313 00:40:24.298367 2706 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d08d91fe9bd218392c5098f1c4efbd8bb493e5f5d3d4baa82d8dbc888e959544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:40:24.298585 kubelet[2706]: E0313 00:40:24.298564 2706 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d08d91fe9bd218392c5098f1c4efbd8bb493e5f5d3d4baa82d8dbc888e959544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b5f9d757-z28tt" Mar 13 00:40:24.298802 kubelet[2706]: E0313 00:40:24.298670 2706 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d08d91fe9bd218392c5098f1c4efbd8bb493e5f5d3d4baa82d8dbc888e959544\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b5f9d757-z28tt" Mar 13 00:40:24.299165 kubelet[2706]: E0313 00:40:24.298942 2706 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b5f9d757-z28tt_calico-system(8774e408-f6b2-4820-93fe-f59e23d02121)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b5f9d757-z28tt_calico-system(8774e408-f6b2-4820-93fe-f59e23d02121)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d08d91fe9bd218392c5098f1c4efbd8bb493e5f5d3d4baa82d8dbc888e959544\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b5f9d757-z28tt" podUID="8774e408-f6b2-4820-93fe-f59e23d02121" Mar 13 00:40:24.377789 containerd[1555]: time="2026-03-13T00:40:24.377675787Z" level=info msg="StartContainer for \"c9f489e718619a2347a2bae66fe8cbec4edebb46e692d3e22f6e193d83c62c72\" returns successfully" Mar 13 00:40:25.087066 kubelet[2706]: I0313 00:40:25.086921 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-p4mvp" podStartSLOduration=4.199713002 podStartE2EDuration="20.086895819s" podCreationTimestamp="2026-03-13 00:40:05 +0000 UTC" firstStartedPulling="2026-03-13 00:40:06.299346907 +0000 UTC m=+20.161337776" lastFinishedPulling="2026-03-13 00:40:22.186529724 +0000 UTC m=+36.048520593" observedRunningTime="2026-03-13 00:40:25.085327748 +0000 UTC m=+38.947318628" watchObservedRunningTime="2026-03-13 00:40:25.086895819 +0000 UTC m=+38.948886798" Mar 13 00:40:25.174198 kubelet[2706]: I0313 00:40:25.174108 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4271cba1-49ad-4316-b909-26a45f24a613-whisker-ca-bundle\") pod \"4271cba1-49ad-4316-b909-26a45f24a613\" (UID: \"4271cba1-49ad-4316-b909-26a45f24a613\") " Mar 13 00:40:25.174364 kubelet[2706]: I0313 00:40:25.174205 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4271cba1-49ad-4316-b909-26a45f24a613-whisker-backend-key-pair\") pod \"4271cba1-49ad-4316-b909-26a45f24a613\" (UID: \"4271cba1-49ad-4316-b909-26a45f24a613\") " Mar 13 00:40:25.174364 kubelet[2706]: I0313 00:40:25.174291 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jpmp\" (UniqueName: \"kubernetes.io/projected/4271cba1-49ad-4316-b909-26a45f24a613-kube-api-access-7jpmp\") pod \"4271cba1-49ad-4316-b909-26a45f24a613\" (UID: \"4271cba1-49ad-4316-b909-26a45f24a613\") " Mar 13 00:40:25.174364 kubelet[2706]: I0313 00:40:25.174319 2706 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4271cba1-49ad-4316-b909-26a45f24a613-nginx-config\") pod \"4271cba1-49ad-4316-b909-26a45f24a613\" (UID: \"4271cba1-49ad-4316-b909-26a45f24a613\") " Mar 13 00:40:25.176796 kubelet[2706]: I0313 00:40:25.176675 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4271cba1-49ad-4316-b909-26a45f24a613-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4271cba1-49ad-4316-b909-26a45f24a613" (UID: "4271cba1-49ad-4316-b909-26a45f24a613"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:40:25.176874 kubelet[2706]: I0313 00:40:25.176836 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4271cba1-49ad-4316-b909-26a45f24a613-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "4271cba1-49ad-4316-b909-26a45f24a613" (UID: "4271cba1-49ad-4316-b909-26a45f24a613"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:40:25.179855 kubelet[2706]: I0313 00:40:25.179749 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4271cba1-49ad-4316-b909-26a45f24a613-kube-api-access-7jpmp" (OuterVolumeSpecName: "kube-api-access-7jpmp") pod "4271cba1-49ad-4316-b909-26a45f24a613" (UID: "4271cba1-49ad-4316-b909-26a45f24a613"). InnerVolumeSpecName "kube-api-access-7jpmp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:40:25.180832 kubelet[2706]: I0313 00:40:25.180757 2706 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4271cba1-49ad-4316-b909-26a45f24a613-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4271cba1-49ad-4316-b909-26a45f24a613" (UID: "4271cba1-49ad-4316-b909-26a45f24a613"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:40:25.182367 systemd[1]: var-lib-kubelet-pods-4271cba1\x2d49ad\x2d4316\x2db909\x2d26a45f24a613-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7jpmp.mount: Deactivated successfully. Mar 13 00:40:25.182621 systemd[1]: var-lib-kubelet-pods-4271cba1\x2d49ad\x2d4316\x2db909\x2d26a45f24a613-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 13 00:40:25.272291 systemd[1]: Created slice kubepods-besteffort-podee00005b_f815_4ee6_a341_a1ca5e393fa9.slice - libcontainer container kubepods-besteffort-podee00005b_f815_4ee6_a341_a1ca5e393fa9.slice. 
Mar 13 00:40:25.275933 kubelet[2706]: I0313 00:40:25.275908 2706 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4271cba1-49ad-4316-b909-26a45f24a613-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 13 00:40:25.276024 kubelet[2706]: I0313 00:40:25.275944 2706 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jpmp\" (UniqueName: \"kubernetes.io/projected/4271cba1-49ad-4316-b909-26a45f24a613-kube-api-access-7jpmp\") on node \"localhost\" DevicePath \"\"" Mar 13 00:40:25.276024 kubelet[2706]: I0313 00:40:25.275955 2706 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/4271cba1-49ad-4316-b909-26a45f24a613-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 13 00:40:25.276024 kubelet[2706]: I0313 00:40:25.275962 2706 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4271cba1-49ad-4316-b909-26a45f24a613-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 13 00:40:25.276269 containerd[1555]: time="2026-03-13T00:40:25.276128448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxx9r,Uid:ee00005b-f815-4ee6-a341-a1ca5e393fa9,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:25.492064 systemd-networkd[1467]: calic4359871a3f: Link UP Mar 13 00:40:25.492622 systemd-networkd[1467]: calic4359871a3f: Gained carrier Mar 13 00:40:25.519626 containerd[1555]: 2026-03-13 00:40:25.313 [ERROR][3839] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:40:25.519626 containerd[1555]: 2026-03-13 00:40:25.343 [INFO][3839] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--wxx9r-eth0 csi-node-driver- calico-system ee00005b-f815-4ee6-a341-a1ca5e393fa9 728 0 2026-03-13 00:40:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-wxx9r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic4359871a3f [] [] }} ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Namespace="calico-system" Pod="csi-node-driver-wxx9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxx9r-" Mar 13 00:40:25.519626 containerd[1555]: 2026-03-13 00:40:25.343 [INFO][3839] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Namespace="calico-system" Pod="csi-node-driver-wxx9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxx9r-eth0" Mar 13 00:40:25.519626 containerd[1555]: 2026-03-13 00:40:25.405 [INFO][3853] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" HandleID="k8s-pod-network.dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Workload="localhost-k8s-csi--node--driver--wxx9r-eth0" Mar 13 00:40:25.520141 containerd[1555]: 2026-03-13 00:40:25.416 [INFO][3853] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" HandleID="k8s-pod-network.dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Workload="localhost-k8s-csi--node--driver--wxx9r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050b4f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-wxx9r", "timestamp":"2026-03-13 00:40:25.405826483 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001e7080)} Mar 13 00:40:25.520141 containerd[1555]: 2026-03-13 00:40:25.416 [INFO][3853] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:25.520141 containerd[1555]: 2026-03-13 00:40:25.416 [INFO][3853] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:25.520141 containerd[1555]: 2026-03-13 00:40:25.416 [INFO][3853] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:40:25.520141 containerd[1555]: 2026-03-13 00:40:25.421 [INFO][3853] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" host="localhost" Mar 13 00:40:25.520141 containerd[1555]: 2026-03-13 00:40:25.427 [INFO][3853] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:40:25.520141 containerd[1555]: 2026-03-13 00:40:25.434 [INFO][3853] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:40:25.520141 containerd[1555]: 2026-03-13 00:40:25.438 [INFO][3853] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:25.520141 containerd[1555]: 2026-03-13 00:40:25.447 [INFO][3853] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:25.520141 containerd[1555]: 2026-03-13 00:40:25.447 [INFO][3853] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" host="localhost" Mar 13 00:40:25.520520 containerd[1555]: 2026-03-13 00:40:25.451 [INFO][3853] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54 Mar 13 00:40:25.520520 containerd[1555]: 2026-03-13 00:40:25.458 [INFO][3853] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" host="localhost" Mar 13 00:40:25.520520 containerd[1555]: 2026-03-13 00:40:25.468 [INFO][3853] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" host="localhost" Mar 13 00:40:25.520520 containerd[1555]: 2026-03-13 00:40:25.468 [INFO][3853] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" host="localhost" Mar 13 00:40:25.520520 containerd[1555]: 2026-03-13 00:40:25.468 [INFO][3853] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:40:25.520520 containerd[1555]: 2026-03-13 00:40:25.468 [INFO][3853] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" HandleID="k8s-pod-network.dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Workload="localhost-k8s-csi--node--driver--wxx9r-eth0" Mar 13 00:40:25.520662 containerd[1555]: 2026-03-13 00:40:25.475 [INFO][3839] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Namespace="calico-system" Pod="csi-node-driver-wxx9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxx9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wxx9r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ee00005b-f815-4ee6-a341-a1ca5e393fa9", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-wxx9r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4359871a3f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:25.520742 containerd[1555]: 2026-03-13 00:40:25.475 [INFO][3839] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Namespace="calico-system" Pod="csi-node-driver-wxx9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxx9r-eth0" Mar 13 00:40:25.520742 containerd[1555]: 2026-03-13 00:40:25.475 [INFO][3839] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4359871a3f ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Namespace="calico-system" Pod="csi-node-driver-wxx9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxx9r-eth0" Mar 13 00:40:25.520742 containerd[1555]: 2026-03-13 00:40:25.495 [INFO][3839] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Namespace="calico-system" Pod="csi-node-driver-wxx9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxx9r-eth0" Mar 13 00:40:25.520808 containerd[1555]: 2026-03-13 00:40:25.496 [INFO][3839] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Namespace="calico-system" Pod="csi-node-driver-wxx9r" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--wxx9r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--wxx9r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ee00005b-f815-4ee6-a341-a1ca5e393fa9", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54", Pod:"csi-node-driver-wxx9r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4359871a3f", MAC:"d6:92:b0:4d:1e:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:25.520891 containerd[1555]: 2026-03-13 00:40:25.515 [INFO][3839] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" Namespace="calico-system" Pod="csi-node-driver-wxx9r" WorkloadEndpoint="localhost-k8s-csi--node--driver--wxx9r-eth0" Mar 13 00:40:25.596612 containerd[1555]: time="2026-03-13T00:40:25.596453181Z" level=info msg="connecting to shim dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54" address="unix:///run/containerd/s/b7e737a8f3afebdb8b82bee1155c31230e904e6cee8df42cfeab30e00bb0f311" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:25.640276 systemd[1]: Started cri-containerd-dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54.scope - libcontainer container dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54. Mar 13 00:40:25.662843 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:40:25.704258 containerd[1555]: time="2026-03-13T00:40:25.704153256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wxx9r,Uid:ee00005b-f815-4ee6-a341-a1ca5e393fa9,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54\"" Mar 13 00:40:25.712722 containerd[1555]: time="2026-03-13T00:40:25.712635005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 00:40:26.074054 kubelet[2706]: I0313 00:40:26.073893 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:40:26.094990 systemd[1]: Removed slice kubepods-besteffort-pod4271cba1_49ad_4316_b909_26a45f24a613.slice - libcontainer container kubepods-besteffort-pod4271cba1_49ad_4316_b909_26a45f24a613.slice. 
Mar 13 00:40:26.301566 systemd[1]: Created slice kubepods-besteffort-podc23a1fda_0b6a_47d7_a82f_c3c50623e919.slice - libcontainer container kubepods-besteffort-podc23a1fda_0b6a_47d7_a82f_c3c50623e919.slice. Mar 13 00:40:26.312323 kubelet[2706]: I0313 00:40:26.312143 2706 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4271cba1-49ad-4316-b909-26a45f24a613" path="/var/lib/kubelet/pods/4271cba1-49ad-4316-b909-26a45f24a613/volumes" Mar 13 00:40:26.387217 kubelet[2706]: I0313 00:40:26.387081 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjhq\" (UniqueName: \"kubernetes.io/projected/c23a1fda-0b6a-47d7-a82f-c3c50623e919-kube-api-access-qtjhq\") pod \"whisker-685b67d8bd-tvdgw\" (UID: \"c23a1fda-0b6a-47d7-a82f-c3c50623e919\") " pod="calico-system/whisker-685b67d8bd-tvdgw" Mar 13 00:40:26.387898 kubelet[2706]: I0313 00:40:26.387753 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/c23a1fda-0b6a-47d7-a82f-c3c50623e919-nginx-config\") pod \"whisker-685b67d8bd-tvdgw\" (UID: \"c23a1fda-0b6a-47d7-a82f-c3c50623e919\") " pod="calico-system/whisker-685b67d8bd-tvdgw" Mar 13 00:40:26.388378 kubelet[2706]: I0313 00:40:26.388008 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c23a1fda-0b6a-47d7-a82f-c3c50623e919-whisker-backend-key-pair\") pod \"whisker-685b67d8bd-tvdgw\" (UID: \"c23a1fda-0b6a-47d7-a82f-c3c50623e919\") " pod="calico-system/whisker-685b67d8bd-tvdgw" Mar 13 00:40:26.388378 kubelet[2706]: I0313 00:40:26.388334 2706 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c23a1fda-0b6a-47d7-a82f-c3c50623e919-whisker-ca-bundle\") pod \"whisker-685b67d8bd-tvdgw\" (UID: \"c23a1fda-0b6a-47d7-a82f-c3c50623e919\") " pod="calico-system/whisker-685b67d8bd-tvdgw" Mar 13 00:40:26.567540 systemd-networkd[1467]: calic4359871a3f: Gained IPv6LL Mar 13 00:40:26.614939 containerd[1555]: time="2026-03-13T00:40:26.614773156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-685b67d8bd-tvdgw,Uid:c23a1fda-0b6a-47d7-a82f-c3c50623e919,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:26.727476 containerd[1555]: time="2026-03-13T00:40:26.727230873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:26.729968 containerd[1555]: time="2026-03-13T00:40:26.729942889Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 13 00:40:26.733151 containerd[1555]: time="2026-03-13T00:40:26.733113252Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:26.750814 containerd[1555]: time="2026-03-13T00:40:26.750746585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:26.753171 containerd[1555]: time="2026-03-13T00:40:26.752762227Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.03976508s" Mar 13 00:40:26.753171 containerd[1555]: time="2026-03-13T00:40:26.752804141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 13 00:40:26.763493 containerd[1555]: time="2026-03-13T00:40:26.763360208Z" level=info msg="CreateContainer within sandbox \"dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 13 00:40:26.786303 containerd[1555]: time="2026-03-13T00:40:26.786180983Z" level=info msg="Container e65f9ac577260bfaefc049defafeccdaec452a8fa5ac71fc87c06aec07c30608: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:26.807919 containerd[1555]: time="2026-03-13T00:40:26.807067503Z" level=info msg="CreateContainer within sandbox \"dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e65f9ac577260bfaefc049defafeccdaec452a8fa5ac71fc87c06aec07c30608\"" Mar 13 00:40:26.811896 containerd[1555]: time="2026-03-13T00:40:26.811815853Z" level=info msg="StartContainer for \"e65f9ac577260bfaefc049defafeccdaec452a8fa5ac71fc87c06aec07c30608\"" Mar 13 00:40:26.817053 containerd[1555]: time="2026-03-13T00:40:26.816973178Z" level=info msg="connecting to shim e65f9ac577260bfaefc049defafeccdaec452a8fa5ac71fc87c06aec07c30608" address="unix:///run/containerd/s/b7e737a8f3afebdb8b82bee1155c31230e904e6cee8df42cfeab30e00bb0f311" protocol=ttrpc version=3 Mar 13 00:40:26.889879 systemd[1]: Started cri-containerd-e65f9ac577260bfaefc049defafeccdaec452a8fa5ac71fc87c06aec07c30608.scope - libcontainer container e65f9ac577260bfaefc049defafeccdaec452a8fa5ac71fc87c06aec07c30608. 
Mar 13 00:40:26.935289 systemd-networkd[1467]: cali1b6f4f8e93e: Link UP Mar 13 00:40:26.936349 systemd-networkd[1467]: cali1b6f4f8e93e: Gained carrier Mar 13 00:40:26.983883 containerd[1555]: 2026-03-13 00:40:26.732 [INFO][4050] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--685b67d8bd--tvdgw-eth0 whisker-685b67d8bd- calico-system c23a1fda-0b6a-47d7-a82f-c3c50623e919 927 0 2026-03-13 00:40:26 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:685b67d8bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-685b67d8bd-tvdgw eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1b6f4f8e93e [] [] }} ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Namespace="calico-system" Pod="whisker-685b67d8bd-tvdgw" WorkloadEndpoint="localhost-k8s-whisker--685b67d8bd--tvdgw-" Mar 13 00:40:26.983883 containerd[1555]: 2026-03-13 00:40:26.733 [INFO][4050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Namespace="calico-system" Pod="whisker-685b67d8bd-tvdgw" WorkloadEndpoint="localhost-k8s-whisker--685b67d8bd--tvdgw-eth0" Mar 13 00:40:26.983883 containerd[1555]: 2026-03-13 00:40:26.808 [INFO][4078] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" HandleID="k8s-pod-network.e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Workload="localhost-k8s-whisker--685b67d8bd--tvdgw-eth0" Mar 13 00:40:26.984674 containerd[1555]: 2026-03-13 00:40:26.819 [INFO][4078] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" HandleID="k8s-pod-network.e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Workload="localhost-k8s-whisker--685b67d8bd--tvdgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ea30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-685b67d8bd-tvdgw", "timestamp":"2026-03-13 00:40:26.808267656 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00024a420)} Mar 13 00:40:26.984674 containerd[1555]: 2026-03-13 00:40:26.819 [INFO][4078] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:26.984674 containerd[1555]: 2026-03-13 00:40:26.819 [INFO][4078] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:40:26.984674 containerd[1555]: 2026-03-13 00:40:26.819 [INFO][4078] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:40:26.984674 containerd[1555]: 2026-03-13 00:40:26.825 [INFO][4078] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" host="localhost" Mar 13 00:40:26.984674 containerd[1555]: 2026-03-13 00:40:26.853 [INFO][4078] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:40:26.984674 containerd[1555]: 2026-03-13 00:40:26.866 [INFO][4078] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:40:26.984674 containerd[1555]: 2026-03-13 00:40:26.874 [INFO][4078] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:26.984674 containerd[1555]: 2026-03-13 00:40:26.882 [INFO][4078] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:26.984674 containerd[1555]: 2026-03-13 00:40:26.882 [INFO][4078] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" host="localhost" Mar 13 00:40:26.984934 containerd[1555]: 2026-03-13 00:40:26.887 [INFO][4078] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40 Mar 13 00:40:26.984934 containerd[1555]: 2026-03-13 00:40:26.905 [INFO][4078] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" host="localhost" Mar 13 00:40:26.984934 containerd[1555]: 2026-03-13 00:40:26.927 [INFO][4078] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" host="localhost" Mar 13 00:40:26.984934 containerd[1555]: 2026-03-13 00:40:26.927 [INFO][4078] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" host="localhost" Mar 13 00:40:26.984934 containerd[1555]: 2026-03-13 00:40:26.927 [INFO][4078] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:40:26.984934 containerd[1555]: 2026-03-13 00:40:26.927 [INFO][4078] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" HandleID="k8s-pod-network.e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Workload="localhost-k8s-whisker--685b67d8bd--tvdgw-eth0" Mar 13 00:40:26.985649 containerd[1555]: 2026-03-13 00:40:26.931 [INFO][4050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Namespace="calico-system" Pod="whisker-685b67d8bd-tvdgw" WorkloadEndpoint="localhost-k8s-whisker--685b67d8bd--tvdgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--685b67d8bd--tvdgw-eth0", GenerateName:"whisker-685b67d8bd-", Namespace:"calico-system", SelfLink:"", UID:"c23a1fda-0b6a-47d7-a82f-c3c50623e919", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"685b67d8bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-685b67d8bd-tvdgw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1b6f4f8e93e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:26.985649 containerd[1555]: 2026-03-13 00:40:26.932 [INFO][4050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Namespace="calico-system" Pod="whisker-685b67d8bd-tvdgw" WorkloadEndpoint="localhost-k8s-whisker--685b67d8bd--tvdgw-eth0" Mar 13 00:40:26.985892 containerd[1555]: 2026-03-13 00:40:26.932 [INFO][4050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b6f4f8e93e ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Namespace="calico-system" Pod="whisker-685b67d8bd-tvdgw" WorkloadEndpoint="localhost-k8s-whisker--685b67d8bd--tvdgw-eth0" Mar 13 00:40:26.985892 containerd[1555]: 2026-03-13 00:40:26.936 [INFO][4050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Namespace="calico-system" Pod="whisker-685b67d8bd-tvdgw" WorkloadEndpoint="localhost-k8s-whisker--685b67d8bd--tvdgw-eth0" Mar 13 00:40:26.986008 containerd[1555]: 2026-03-13 00:40:26.937 [INFO][4050] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Namespace="calico-system" Pod="whisker-685b67d8bd-tvdgw" WorkloadEndpoint="localhost-k8s-whisker--685b67d8bd--tvdgw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--685b67d8bd--tvdgw-eth0", GenerateName:"whisker-685b67d8bd-", Namespace:"calico-system", SelfLink:"", UID:"c23a1fda-0b6a-47d7-a82f-c3c50623e919", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"685b67d8bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40", Pod:"whisker-685b67d8bd-tvdgw", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1b6f4f8e93e", MAC:"4a:7f:bf:2f:17:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:26.986237 containerd[1555]: 2026-03-13 00:40:26.977 [INFO][4050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" Namespace="calico-system" Pod="whisker-685b67d8bd-tvdgw" WorkloadEndpoint="localhost-k8s-whisker--685b67d8bd--tvdgw-eth0" Mar 13 00:40:27.063955 containerd[1555]: time="2026-03-13T00:40:27.062712384Z" level=info msg="connecting to shim e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40" address="unix:///run/containerd/s/995f4afae5cf48a6183f58f7651d4f63bca991c09faaf3e8628f0a4aaa2ae5c0" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:27.106490 containerd[1555]: time="2026-03-13T00:40:27.105889688Z" level=info msg="StartContainer for \"e65f9ac577260bfaefc049defafeccdaec452a8fa5ac71fc87c06aec07c30608\" returns successfully" Mar 13 00:40:27.111479 containerd[1555]: time="2026-03-13T00:40:27.110016387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 13 00:40:27.193738 systemd[1]: Started cri-containerd-e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40.scope - libcontainer container e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40. 
Mar 13 00:40:27.217916 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:40:27.311719 containerd[1555]: time="2026-03-13T00:40:27.310379426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-685b67d8bd-tvdgw,Uid:c23a1fda-0b6a-47d7-a82f-c3c50623e919,Namespace:calico-system,Attempt:0,} returns sandbox id \"e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40\"" Mar 13 00:40:27.884096 systemd-networkd[1467]: vxlan.calico: Link UP Mar 13 00:40:27.884113 systemd-networkd[1467]: vxlan.calico: Gained carrier Mar 13 00:40:28.331389 containerd[1555]: time="2026-03-13T00:40:28.331290581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:28.333054 containerd[1555]: time="2026-03-13T00:40:28.332866629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 13 00:40:28.334600 containerd[1555]: time="2026-03-13T00:40:28.334534429Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:28.367071 containerd[1555]: time="2026-03-13T00:40:28.367014762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:28.369664 containerd[1555]: time="2026-03-13T00:40:28.369582072Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.259500513s" Mar 13 00:40:28.369664 containerd[1555]: time="2026-03-13T00:40:28.369659219Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 13 00:40:28.375198 containerd[1555]: time="2026-03-13T00:40:28.374209817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:40:28.386699 containerd[1555]: time="2026-03-13T00:40:28.386619664Z" level=info msg="CreateContainer within sandbox \"dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 13 00:40:28.403875 containerd[1555]: time="2026-03-13T00:40:28.403749794Z" level=info msg="Container db7f91b7c7fda23c942d3fdafdec45c8c7b2827d31d8421e18416c79474b0b1f: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:28.421483 containerd[1555]: time="2026-03-13T00:40:28.421291033Z" level=info msg="CreateContainer within sandbox \"dbccc95ed878f5ace91e2f4d3337305378e43e5cee3a5120f93b3711f7273d54\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"db7f91b7c7fda23c942d3fdafdec45c8c7b2827d31d8421e18416c79474b0b1f\"" Mar 13 00:40:28.422529 containerd[1555]: time="2026-03-13T00:40:28.422363908Z" level=info msg="StartContainer for \"db7f91b7c7fda23c942d3fdafdec45c8c7b2827d31d8421e18416c79474b0b1f\"" Mar 13 
00:40:28.425936 containerd[1555]: time="2026-03-13T00:40:28.425665284Z" level=info msg="connecting to shim db7f91b7c7fda23c942d3fdafdec45c8c7b2827d31d8421e18416c79474b0b1f" address="unix:///run/containerd/s/b7e737a8f3afebdb8b82bee1155c31230e904e6cee8df42cfeab30e00bb0f311" protocol=ttrpc version=3 Mar 13 00:40:28.473715 systemd[1]: Started cri-containerd-db7f91b7c7fda23c942d3fdafdec45c8c7b2827d31d8421e18416c79474b0b1f.scope - libcontainer container db7f91b7c7fda23c942d3fdafdec45c8c7b2827d31d8421e18416c79474b0b1f. Mar 13 00:40:28.623478 containerd[1555]: time="2026-03-13T00:40:28.623154272Z" level=info msg="StartContainer for \"db7f91b7c7fda23c942d3fdafdec45c8c7b2827d31d8421e18416c79474b0b1f\" returns successfully" Mar 13 00:40:28.677693 systemd-networkd[1467]: cali1b6f4f8e93e: Gained IPv6LL Mar 13 00:40:28.932714 systemd-networkd[1467]: vxlan.calico: Gained IPv6LL Mar 13 00:40:29.002973 containerd[1555]: time="2026-03-13T00:40:29.002863542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:29.004146 containerd[1555]: time="2026-03-13T00:40:29.004056917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 13 00:40:29.005758 containerd[1555]: time="2026-03-13T00:40:29.005626562Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:29.008230 containerd[1555]: time="2026-03-13T00:40:29.008205247Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:29.008749 containerd[1555]: time="2026-03-13T00:40:29.008657932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 633.683257ms" Mar 13 00:40:29.008749 containerd[1555]: time="2026-03-13T00:40:29.008702023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 13 00:40:29.016291 containerd[1555]: time="2026-03-13T00:40:29.016188657Z" level=info msg="CreateContainer within sandbox \"e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:40:29.024752 containerd[1555]: time="2026-03-13T00:40:29.024703291Z" level=info msg="Container f7dc99086941b17d76cabd92980e6e8b97863981e236dbbdf8e7d447791dc3c7: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:29.036231 containerd[1555]: time="2026-03-13T00:40:29.036112808Z" level=info msg="CreateContainer within sandbox \"e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f7dc99086941b17d76cabd92980e6e8b97863981e236dbbdf8e7d447791dc3c7\"" Mar 13 00:40:29.037043 containerd[1555]: time="2026-03-13T00:40:29.036916733Z" level=info msg="StartContainer for \"f7dc99086941b17d76cabd92980e6e8b97863981e236dbbdf8e7d447791dc3c7\"" Mar 13 00:40:29.041250 containerd[1555]: 
time="2026-03-13T00:40:29.041080241Z" level=info msg="connecting to shim f7dc99086941b17d76cabd92980e6e8b97863981e236dbbdf8e7d447791dc3c7" address="unix:///run/containerd/s/995f4afae5cf48a6183f58f7651d4f63bca991c09faaf3e8628f0a4aaa2ae5c0" protocol=ttrpc version=3 Mar 13 00:40:29.081675 systemd[1]: Started cri-containerd-f7dc99086941b17d76cabd92980e6e8b97863981e236dbbdf8e7d447791dc3c7.scope - libcontainer container f7dc99086941b17d76cabd92980e6e8b97863981e236dbbdf8e7d447791dc3c7. Mar 13 00:40:29.165171 kubelet[2706]: I0313 00:40:29.165090 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wxx9r" podStartSLOduration=21.503305998 podStartE2EDuration="24.165068471s" podCreationTimestamp="2026-03-13 00:40:05 +0000 UTC" firstStartedPulling="2026-03-13 00:40:25.711849325 +0000 UTC m=+39.573840195" lastFinishedPulling="2026-03-13 00:40:28.373611799 +0000 UTC m=+42.235602668" observedRunningTime="2026-03-13 00:40:29.164601388 +0000 UTC m=+43.026592258" watchObservedRunningTime="2026-03-13 00:40:29.165068471 +0000 UTC m=+43.027059341" Mar 13 00:40:29.175904 containerd[1555]: time="2026-03-13T00:40:29.175858806Z" level=info msg="StartContainer for \"f7dc99086941b17d76cabd92980e6e8b97863981e236dbbdf8e7d447791dc3c7\" returns successfully" Mar 13 00:40:29.178538 containerd[1555]: time="2026-03-13T00:40:29.178511411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:40:29.458498 kubelet[2706]: I0313 00:40:29.458034 2706 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 13 00:40:29.459591 kubelet[2706]: I0313 00:40:29.459483 2706 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 13 00:40:30.073062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount286537210.mount: Deactivated successfully. 
Mar 13 00:40:30.108963 containerd[1555]: time="2026-03-13T00:40:30.108801090Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:30.110180 containerd[1555]: time="2026-03-13T00:40:30.110079396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 13 00:40:30.111813 containerd[1555]: time="2026-03-13T00:40:30.111752541Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:30.115613 containerd[1555]: time="2026-03-13T00:40:30.115526936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:40:30.116890 containerd[1555]: time="2026-03-13T00:40:30.116773888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 938.099326ms" Mar 13 00:40:30.116890 containerd[1555]: time="2026-03-13T00:40:30.116844973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 13 00:40:30.126044 containerd[1555]: time="2026-03-13T00:40:30.125964464Z" level=info msg="CreateContainer within sandbox \"e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:40:30.140348 containerd[1555]: time="2026-03-13T00:40:30.140219569Z" level=info msg="Container abc700d7840f295551a5ea2821e06da0f65d9a3f79c6cb047163e1e9f7dc2b2b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:40:30.168775 containerd[1555]: time="2026-03-13T00:40:30.168672446Z" level=info msg="CreateContainer within sandbox \"e40f7a111f7e9108b7b783d56a66394ad43cbb7420026917bc0ed628bc5d4c40\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"abc700d7840f295551a5ea2821e06da0f65d9a3f79c6cb047163e1e9f7dc2b2b\"" Mar 13 00:40:30.170260 containerd[1555]: time="2026-03-13T00:40:30.170204968Z" level=info msg="StartContainer for \"abc700d7840f295551a5ea2821e06da0f65d9a3f79c6cb047163e1e9f7dc2b2b\"" Mar 13 00:40:30.172112 containerd[1555]: time="2026-03-13T00:40:30.172069734Z" level=info msg="connecting to shim abc700d7840f295551a5ea2821e06da0f65d9a3f79c6cb047163e1e9f7dc2b2b" address="unix:///run/containerd/s/995f4afae5cf48a6183f58f7651d4f63bca991c09faaf3e8628f0a4aaa2ae5c0" protocol=ttrpc version=3 Mar 13 00:40:30.209710 systemd[1]: Started cri-containerd-abc700d7840f295551a5ea2821e06da0f65d9a3f79c6cb047163e1e9f7dc2b2b.scope - libcontainer container abc700d7840f295551a5ea2821e06da0f65d9a3f79c6cb047163e1e9f7dc2b2b. 
Mar 13 00:40:30.292277 containerd[1555]: time="2026-03-13T00:40:30.292114810Z" level=info msg="StartContainer for \"abc700d7840f295551a5ea2821e06da0f65d9a3f79c6cb047163e1e9f7dc2b2b\" returns successfully" Mar 13 00:40:31.167243 kubelet[2706]: I0313 00:40:31.166828 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-685b67d8bd-tvdgw" podStartSLOduration=2.367454409 podStartE2EDuration="5.166804196s" podCreationTimestamp="2026-03-13 00:40:26 +0000 UTC" firstStartedPulling="2026-03-13 00:40:27.318917126 +0000 UTC m=+41.180907985" lastFinishedPulling="2026-03-13 00:40:30.118266903 +0000 UTC m=+43.980257772" observedRunningTime="2026-03-13 00:40:31.166389338 +0000 UTC m=+45.028380227" watchObservedRunningTime="2026-03-13 00:40:31.166804196 +0000 UTC m=+45.028795065" Mar 13 00:40:35.369180 kubelet[2706]: I0313 00:40:35.368772 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:40:36.264305 containerd[1555]: time="2026-03-13T00:40:36.263770833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5f9d757-5wn7z,Uid:8e0c4400-8e9e-40d2-b63d-330be065ad79,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:36.265011 containerd[1555]: time="2026-03-13T00:40:36.264553193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5f9d757-z28tt,Uid:8774e408-f6b2-4820-93fe-f59e23d02121,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:36.508037 systemd-networkd[1467]: cali3796e7a3e68: Link UP Mar 13 00:40:36.509571 systemd-networkd[1467]: cali3796e7a3e68: Gained carrier Mar 13 00:40:36.529963 containerd[1555]: 2026-03-13 00:40:36.358 [INFO][4453] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0 calico-apiserver-6b5f9d757- calico-system 8774e408-f6b2-4820-93fe-f59e23d02121 866 0 2026-03-13 00:40:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b5f9d757 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b5f9d757-z28tt eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3796e7a3e68 [] [] }} ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-z28tt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-" Mar 13 00:40:36.529963 containerd[1555]: 2026-03-13 00:40:36.359 [INFO][4453] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-z28tt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0" Mar 13 00:40:36.529963 containerd[1555]: 2026-03-13 00:40:36.427 [INFO][4482] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" HandleID="k8s-pod-network.ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Workload="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0" Mar 13 00:40:36.531772 containerd[1555]: 2026-03-13 00:40:36.438 [INFO][4482] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" 
HandleID="k8s-pod-network.ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Workload="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000225bf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6b5f9d757-z28tt", "timestamp":"2026-03-13 00:40:36.427762273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006f2580)} Mar 13 00:40:36.531772 containerd[1555]: 2026-03-13 00:40:36.439 [INFO][4482] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:36.531772 containerd[1555]: 2026-03-13 00:40:36.439 [INFO][4482] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:36.531772 containerd[1555]: 2026-03-13 00:40:36.439 [INFO][4482] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:40:36.531772 containerd[1555]: 2026-03-13 00:40:36.446 [INFO][4482] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" host="localhost" Mar 13 00:40:36.531772 containerd[1555]: 2026-03-13 00:40:36.454 [INFO][4482] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:40:36.531772 containerd[1555]: 2026-03-13 00:40:36.464 [INFO][4482] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:40:36.531772 containerd[1555]: 2026-03-13 00:40:36.468 [INFO][4482] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:36.531772 containerd[1555]: 2026-03-13 00:40:36.471 [INFO][4482] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:36.531772 containerd[1555]: 2026-03-13 00:40:36.472 [INFO][4482] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" host="localhost" Mar 13 00:40:36.533046 containerd[1555]: 2026-03-13 00:40:36.474 [INFO][4482] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521 Mar 13 00:40:36.533046 containerd[1555]: 2026-03-13 00:40:36.481 [INFO][4482] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" host="localhost" Mar 13 00:40:36.533046 containerd[1555]: 2026-03-13 00:40:36.493 [INFO][4482] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" host="localhost" Mar 13 00:40:36.533046 containerd[1555]: 2026-03-13 00:40:36.493 [INFO][4482] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" host="localhost" Mar 13 00:40:36.533046 containerd[1555]: 2026-03-13 00:40:36.493 [INFO][4482] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:40:36.533046 containerd[1555]: 2026-03-13 00:40:36.493 [INFO][4482] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" HandleID="k8s-pod-network.ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Workload="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0" Mar 13 00:40:36.533291 containerd[1555]: 2026-03-13 00:40:36.497 [INFO][4453] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-z28tt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0", GenerateName:"calico-apiserver-6b5f9d757-", Namespace:"calico-system", SelfLink:"", UID:"8774e408-f6b2-4820-93fe-f59e23d02121", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5f9d757", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b5f9d757-z28tt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3796e7a3e68", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:36.534518 containerd[1555]: 2026-03-13 00:40:36.497 [INFO][4453] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-z28tt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0" Mar 13 00:40:36.534518 containerd[1555]: 2026-03-13 00:40:36.497 [INFO][4453] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3796e7a3e68 ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-z28tt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0" Mar 13 00:40:36.534518 containerd[1555]: 2026-03-13 00:40:36.501 [INFO][4453] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-z28tt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0" Mar 13 00:40:36.534819 containerd[1555]: 2026-03-13 00:40:36.501 [INFO][4453] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-z28tt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0", GenerateName:"calico-apiserver-6b5f9d757-", Namespace:"calico-system", SelfLink:"", UID:"8774e408-f6b2-4820-93fe-f59e23d02121", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5f9d757", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521", Pod:"calico-apiserver-6b5f9d757-z28tt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3796e7a3e68", MAC:"ea:3d:eb:b5:ca:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:36.535320 containerd[1555]: 2026-03-13 00:40:36.517 [INFO][4453] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-z28tt" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--z28tt-eth0" Mar 13 00:40:36.606050 containerd[1555]: time="2026-03-13T00:40:36.606010956Z" level=info msg="connecting to shim ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521" address="unix:///run/containerd/s/0bec895767ac364f0f08dfe9d10d29c18f93eae63922a199942d32b050cecf00" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:36.669568 systemd-networkd[1467]: cali5ad7dedef98: Link UP Mar 13 00:40:36.673276 systemd-networkd[1467]: cali5ad7dedef98: Gained carrier Mar 13 00:40:36.701736 systemd[1]: Started cri-containerd-ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521.scope - libcontainer container ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521. 
Mar 13 00:40:36.705942 containerd[1555]: 2026-03-13 00:40:36.357 [INFO][4452] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0 calico-apiserver-6b5f9d757- calico-system 8e0c4400-8e9e-40d2-b63d-330be065ad79 865 0 2026-03-13 00:40:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b5f9d757 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b5f9d757-5wn7z eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5ad7dedef98 [] [] }} ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-5wn7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-" Mar 13 00:40:36.705942 containerd[1555]: 2026-03-13 00:40:36.358 [INFO][4452] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-5wn7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0" Mar 13 00:40:36.705942 containerd[1555]: 2026-03-13 00:40:36.426 [INFO][4480] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" HandleID="k8s-pod-network.4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Workload="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0" Mar 13 00:40:36.706151 containerd[1555]: 2026-03-13 00:40:36.438 [INFO][4480] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" HandleID="k8s-pod-network.4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Workload="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fbf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6b5f9d757-5wn7z", "timestamp":"2026-03-13 00:40:36.426796013 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00001f1e0)} Mar 13 00:40:36.706151 containerd[1555]: 2026-03-13 00:40:36.439 [INFO][4480] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:36.706151 containerd[1555]: 2026-03-13 00:40:36.494 [INFO][4480] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:40:36.706151 containerd[1555]: 2026-03-13 00:40:36.494 [INFO][4480] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:40:36.706151 containerd[1555]: 2026-03-13 00:40:36.554 [INFO][4480] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" host="localhost" Mar 13 00:40:36.706151 containerd[1555]: 2026-03-13 00:40:36.567 [INFO][4480] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:40:36.706151 containerd[1555]: 2026-03-13 00:40:36.589 [INFO][4480] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:40:36.706151 containerd[1555]: 2026-03-13 00:40:36.593 [INFO][4480] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:36.706151 containerd[1555]: 2026-03-13 00:40:36.602 [INFO][4480] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:36.706151 containerd[1555]: 2026-03-13 00:40:36.606 [INFO][4480] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" host="localhost" Mar 13 00:40:36.708288 containerd[1555]: 2026-03-13 00:40:36.611 [INFO][4480] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000 Mar 13 00:40:36.708288 containerd[1555]: 2026-03-13 00:40:36.619 [INFO][4480] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" host="localhost" Mar 13 00:40:36.708288 containerd[1555]: 2026-03-13 00:40:36.633 [INFO][4480] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" host="localhost" Mar 13 00:40:36.708288 containerd[1555]: 2026-03-13 00:40:36.635 [INFO][4480] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" host="localhost" Mar 13 00:40:36.708288 containerd[1555]: 2026-03-13 00:40:36.635 [INFO][4480] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:40:36.708288 containerd[1555]: 2026-03-13 00:40:36.635 [INFO][4480] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" HandleID="k8s-pod-network.4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Workload="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0" Mar 13 00:40:36.708472 containerd[1555]: 2026-03-13 00:40:36.663 [INFO][4452] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-5wn7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0", GenerateName:"calico-apiserver-6b5f9d757-", Namespace:"calico-system", SelfLink:"", UID:"8e0c4400-8e9e-40d2-b63d-330be065ad79", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5f9d757", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b5f9d757-5wn7z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5ad7dedef98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:36.708560 containerd[1555]: 2026-03-13 00:40:36.663 [INFO][4452] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-5wn7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0" Mar 13 00:40:36.708560 containerd[1555]: 2026-03-13 00:40:36.663 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ad7dedef98 ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-5wn7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0" Mar 13 00:40:36.708560 containerd[1555]: 2026-03-13 00:40:36.674 [INFO][4452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-5wn7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0" Mar 13 00:40:36.708623 containerd[1555]: 2026-03-13 00:40:36.676 [INFO][4452] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-5wn7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0", GenerateName:"calico-apiserver-6b5f9d757-", Namespace:"calico-system", SelfLink:"", UID:"8e0c4400-8e9e-40d2-b63d-330be065ad79", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b5f9d757", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000", Pod:"calico-apiserver-6b5f9d757-5wn7z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5ad7dedef98", MAC:"42:6e:5f:4a:e2:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:36.708702 containerd[1555]: 2026-03-13 00:40:36.698 [INFO][4452] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" Namespace="calico-system" Pod="calico-apiserver-6b5f9d757-5wn7z" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b5f9d757--5wn7z-eth0" Mar 13 00:40:36.768787 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:40:36.770610 containerd[1555]: time="2026-03-13T00:40:36.770512104Z" level=info msg="connecting to shim 4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000" address="unix:///run/containerd/s/16a23e3e11780b0cc8fb5cef171bc77ce675b37dee2855b45d51367546fb9bfb" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:36.806942 systemd[1]: Started cri-containerd-4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000.scope - libcontainer container 4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000. 
Mar 13 00:40:36.887476 containerd[1555]: time="2026-03-13T00:40:36.887316362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5f9d757-z28tt,Uid:8774e408-f6b2-4820-93fe-f59e23d02121,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521\"" Mar 13 00:40:36.893355 containerd[1555]: time="2026-03-13T00:40:36.893273468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:40:36.894065 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:40:36.979959 containerd[1555]: time="2026-03-13T00:40:36.979914103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b5f9d757-5wn7z,Uid:8e0c4400-8e9e-40d2-b63d-330be065ad79,Namespace:calico-system,Attempt:0,} returns sandbox id \"4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000\"" Mar 13 00:40:37.265729 containerd[1555]: time="2026-03-13T00:40:37.265597627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6bd96965-2jhx6,Uid:26c4e91b-29e2-464c-92bb-dfe00ec079cd,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:37.603186 systemd-networkd[1467]: cali4fdd878716a: Link UP Mar 13 00:40:37.609991 systemd-networkd[1467]: cali4fdd878716a: Gained carrier Mar 13 00:40:37.648199 containerd[1555]: 2026-03-13 00:40:37.401 [INFO][4631] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0 calico-kube-controllers-5c6bd96965- calico-system 26c4e91b-29e2-464c-92bb-dfe00ec079cd 863 0 2026-03-13 00:40:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c6bd96965 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5c6bd96965-2jhx6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4fdd878716a [] [] }} ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Namespace="calico-system" Pod="calico-kube-controllers-5c6bd96965-2jhx6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-" Mar 13 00:40:37.648199 containerd[1555]: 2026-03-13 00:40:37.401 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Namespace="calico-system" Pod="calico-kube-controllers-5c6bd96965-2jhx6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0" Mar 13 00:40:37.648199 containerd[1555]: 2026-03-13 00:40:37.482 [INFO][4646] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" HandleID="k8s-pod-network.b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Workload="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0" Mar 13 00:40:37.649100 containerd[1555]: 2026-03-13 00:40:37.498 [INFO][4646] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" HandleID="k8s-pod-network.b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Workload="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00049ea50), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5c6bd96965-2jhx6", "timestamp":"2026-03-13 00:40:37.482365727 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006826e0)} Mar 13 00:40:37.649100 containerd[1555]: 2026-03-13 00:40:37.498 [INFO][4646] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:37.649100 containerd[1555]: 2026-03-13 00:40:37.498 [INFO][4646] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:37.649100 containerd[1555]: 2026-03-13 00:40:37.498 [INFO][4646] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:40:37.649100 containerd[1555]: 2026-03-13 00:40:37.505 [INFO][4646] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" host="localhost" Mar 13 00:40:37.649100 containerd[1555]: 2026-03-13 00:40:37.521 [INFO][4646] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:40:37.649100 containerd[1555]: 2026-03-13 00:40:37.529 [INFO][4646] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:40:37.649100 containerd[1555]: 2026-03-13 00:40:37.535 [INFO][4646] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:37.649100 containerd[1555]: 2026-03-13 00:40:37.545 [INFO][4646] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:37.649699 containerd[1555]: 2026-03-13 00:40:37.545 [INFO][4646] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" host="localhost" Mar 13 00:40:37.649699 containerd[1555]: 2026-03-13 00:40:37.570 [INFO][4646] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533 Mar 13 00:40:37.649699 containerd[1555]: 2026-03-13 00:40:37.579 [INFO][4646] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" host="localhost" Mar 13 00:40:37.649699 containerd[1555]: 2026-03-13 00:40:37.590 [INFO][4646] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" host="localhost" Mar 13 00:40:37.649699 containerd[1555]: 2026-03-13 00:40:37.590 [INFO][4646] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" host="localhost" Mar 13 00:40:37.649699 containerd[1555]: 2026-03-13 00:40:37.590 [INFO][4646] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:40:37.649699 containerd[1555]: 2026-03-13 00:40:37.591 [INFO][4646] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" HandleID="k8s-pod-network.b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Workload="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0" Mar 13 00:40:37.649884 containerd[1555]: 2026-03-13 00:40:37.594 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Namespace="calico-system" Pod="calico-kube-controllers-5c6bd96965-2jhx6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0", GenerateName:"calico-kube-controllers-5c6bd96965-", Namespace:"calico-system", SelfLink:"", UID:"26c4e91b-29e2-464c-92bb-dfe00ec079cd", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c6bd96965", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5c6bd96965-2jhx6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4fdd878716a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:37.650030 containerd[1555]: 2026-03-13 00:40:37.594 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Namespace="calico-system" Pod="calico-kube-controllers-5c6bd96965-2jhx6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0" Mar 13 00:40:37.650030 containerd[1555]: 2026-03-13 00:40:37.594 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fdd878716a ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Namespace="calico-system" Pod="calico-kube-controllers-5c6bd96965-2jhx6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0" Mar 13 00:40:37.650030 containerd[1555]: 2026-03-13 00:40:37.611 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Namespace="calico-system" Pod="calico-kube-controllers-5c6bd96965-2jhx6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0" Mar 13 00:40:37.650132 containerd[1555]: 2026-03-13 00:40:37.616 [INFO][4631] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Namespace="calico-system" Pod="calico-kube-controllers-5c6bd96965-2jhx6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0", GenerateName:"calico-kube-controllers-5c6bd96965-", Namespace:"calico-system", SelfLink:"", UID:"26c4e91b-29e2-464c-92bb-dfe00ec079cd", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c6bd96965", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533", Pod:"calico-kube-controllers-5c6bd96965-2jhx6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4fdd878716a", MAC:"aa:d5:43:5f:82:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:40:37.650295 containerd[1555]: 2026-03-13 00:40:37.635 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" Namespace="calico-system" Pod="calico-kube-controllers-5c6bd96965-2jhx6" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c6bd96965--2jhx6-eth0" Mar 13 00:40:37.813726 containerd[1555]: time="2026-03-13T00:40:37.813659329Z" level=info msg="connecting to shim b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533" address="unix:///run/containerd/s/3ccab395b90c876b2724ed00d1bd2a01c3359fbb5cfac9f4313ab72c3fad4d1c" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:40:37.862831 systemd[1]: Started cri-containerd-b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533.scope - libcontainer container b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533. 
Mar 13 00:40:37.917552 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 13 00:40:37.981332 containerd[1555]: time="2026-03-13T00:40:37.981077574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c6bd96965-2jhx6,Uid:26c4e91b-29e2-464c-92bb-dfe00ec079cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533\"" Mar 13 00:40:38.148851 systemd-networkd[1467]: cali3796e7a3e68: Gained IPv6LL Mar 13 00:40:38.212913 systemd-networkd[1467]: cali5ad7dedef98: Gained IPv6LL Mar 13 00:40:38.264235 kubelet[2706]: E0313 00:40:38.264164 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:40:38.265782 containerd[1555]: time="2026-03-13T00:40:38.265607602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-tp5kj,Uid:b013b712-c24d-468f-86cc-2f4dbb3799a5,Namespace:calico-system,Attempt:0,}" Mar 13 00:40:38.267477 containerd[1555]: time="2026-03-13T00:40:38.267323588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2clt,Uid:e6f02b0a-183f-4b3c-87a6-0ef7fdef800d,Namespace:kube-system,Attempt:0,}" Mar 13 00:40:38.529548 systemd-networkd[1467]: cali72b61c7082a: Link UP Mar 13 00:40:38.530004 systemd-networkd[1467]: cali72b61c7082a: Gained carrier Mar 13 00:40:38.573732 containerd[1555]: 2026-03-13 00:40:38.356 [INFO][4728] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--h2clt-eth0 coredns-674b8bbfcf- kube-system e6f02b0a-183f-4b3c-87a6-0ef7fdef800d 867 0 2026-03-13 00:39:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-h2clt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali72b61c7082a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2clt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2clt-" Mar 13 00:40:38.573732 containerd[1555]: 2026-03-13 00:40:38.357 [INFO][4728] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2clt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2clt-eth0" Mar 13 00:40:38.573732 containerd[1555]: 2026-03-13 00:40:38.427 [INFO][4759] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" HandleID="k8s-pod-network.0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Workload="localhost-k8s-coredns--674b8bbfcf--h2clt-eth0" Mar 13 00:40:38.574575 containerd[1555]: 2026-03-13 00:40:38.442 [INFO][4759] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" HandleID="k8s-pod-network.0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Workload="localhost-k8s-coredns--674b8bbfcf--h2clt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000283580), 
Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-h2clt", "timestamp":"2026-03-13 00:40:38.427365726 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000528840)} Mar 13 00:40:38.574575 containerd[1555]: 2026-03-13 00:40:38.443 [INFO][4759] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:40:38.574575 containerd[1555]: 2026-03-13 00:40:38.452 [INFO][4759] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:40:38.574575 containerd[1555]: 2026-03-13 00:40:38.452 [INFO][4759] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 13 00:40:38.574575 containerd[1555]: 2026-03-13 00:40:38.457 [INFO][4759] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" host="localhost" Mar 13 00:40:38.574575 containerd[1555]: 2026-03-13 00:40:38.465 [INFO][4759] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 13 00:40:38.574575 containerd[1555]: 2026-03-13 00:40:38.473 [INFO][4759] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 13 00:40:38.574575 containerd[1555]: 2026-03-13 00:40:38.477 [INFO][4759] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:38.574575 containerd[1555]: 2026-03-13 00:40:38.481 [INFO][4759] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 13 00:40:38.574575 containerd[1555]: 2026-03-13 00:40:38.481 [INFO][4759] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" host="localhost" Mar 13 00:40:38.575012 containerd[1555]: 2026-03-13 00:40:38.483 [INFO][4759] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63 Mar 13 00:40:38.575012 containerd[1555]: 2026-03-13 00:40:38.493 [INFO][4759] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" host="localhost" Mar 13 00:40:38.575012 containerd[1555]: 2026-03-13 00:40:38.506 [INFO][4759] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" host="localhost" Mar 13 00:40:38.575012 containerd[1555]: 2026-03-13 00:40:38.506 [INFO][4759] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" host="localhost" Mar 13 00:40:38.575012 containerd[1555]: 2026-03-13 00:40:38.508 [INFO][4759] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:40:38.575012 containerd[1555]: 2026-03-13 00:40:38.508 [INFO][4759] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" HandleID="k8s-pod-network.0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Workload="localhost-k8s-coredns--674b8bbfcf--h2clt-eth0"
Mar 13 00:40:38.575144 containerd[1555]: 2026-03-13 00:40:38.514 [INFO][4728] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2clt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2clt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h2clt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e6f02b0a-183f-4b3c-87a6-0ef7fdef800d", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-h2clt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72b61c7082a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:40:38.575295 containerd[1555]: 2026-03-13 00:40:38.517 [INFO][4728] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2clt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2clt-eth0"
Mar 13 00:40:38.575295 containerd[1555]: 2026-03-13 00:40:38.517 [INFO][4728] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali72b61c7082a ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2clt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2clt-eth0"
Mar 13 00:40:38.575295 containerd[1555]: 2026-03-13 00:40:38.531 [INFO][4728] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2clt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2clt-eth0"
Mar 13 00:40:38.575390 containerd[1555]: 2026-03-13 00:40:38.532 [INFO][4728] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2clt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2clt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--h2clt-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e6f02b0a-183f-4b3c-87a6-0ef7fdef800d", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63", Pod:"coredns-674b8bbfcf-h2clt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali72b61c7082a", MAC:"7e:cb:7b:48:db:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:40:38.575390 containerd[1555]: 2026-03-13 00:40:38.563 [INFO][4728] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" Namespace="kube-system" Pod="coredns-674b8bbfcf-h2clt" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--h2clt-eth0"
Mar 13 00:40:38.653261 systemd-networkd[1467]: calid10becbd42c: Link UP
Mar 13 00:40:38.653721 systemd-networkd[1467]: calid10becbd42c: Gained carrier
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.355 [INFO][4738] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--tp5kj-eth0 goldmane-5b85766d88- calico-system b013b712-c24d-468f-86cc-2f4dbb3799a5 864 0 2026-03-13 00:40:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-tp5kj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid10becbd42c [] [] }} ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Namespace="calico-system" Pod="goldmane-5b85766d88-tp5kj" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--tp5kj-"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.356 [INFO][4738] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Namespace="calico-system" Pod="goldmane-5b85766d88-tp5kj" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--tp5kj-eth0"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.435 [INFO][4761] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" HandleID="k8s-pod-network.8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Workload="localhost-k8s-goldmane--5b85766d88--tp5kj-eth0"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.454 [INFO][4761] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" HandleID="k8s-pod-network.8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Workload="localhost-k8s-goldmane--5b85766d88--tp5kj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004cf430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-tp5kj", "timestamp":"2026-03-13 00:40:38.435873765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001fadc0)}
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.454 [INFO][4761] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.508 [INFO][4761] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.509 [INFO][4761] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.561 [INFO][4761] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" host="localhost"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.578 [INFO][4761] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.596 [INFO][4761] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.601 [INFO][4761] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.605 [INFO][4761] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.605 [INFO][4761] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" host="localhost"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.608 [INFO][4761] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.615 [INFO][4761] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" host="localhost"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.625 [INFO][4761] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" host="localhost"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.625 [INFO][4761] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" host="localhost"
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.625 [INFO][4761] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
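The ipam.go sequence above is Calico's block-affinity allocation in miniature: take the host-wide lock, confirm this host's affinity for the 192.168.88.128/26 block, claim the next free address by writing the block back, then release the lock. A minimal sketch of that claim step, assuming a simplified in-memory block rather than Calico's real datastore types:

package main

import (
	"fmt"
	"net"
	"sync"
)

// block is a simplified stand-in for a Calico /26 IPAM block: 64
// addresses tracked with a used bitmap. Illustrative only; the real
// logic lives in the ipam.go lines quoted in the log above.
type block struct {
	base net.IP   // first address of the /26, e.g. 192.168.88.128
	used [64]bool // allocation state for each offset in the block
}

var hostWideLock sync.Mutex // stands in for the host-wide IPAM lock

// assign claims the next free address, mirroring "Attempting to assign
// 1 addresses from block" followed by "Successfully claimed IPs".
func (b *block) assign() (net.IP, error) {
	hostWideLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostWideLock.Unlock() // "Released host-wide IPAM lock."
	for off, inUse := range b.used {
		if inUse {
			continue
		}
		b.used[off] = true // writing the block back claims the IP
		ip := make(net.IP, len(b.base))
		copy(ip, b.base)
		ip[len(ip)-1] += byte(off)
		return ip, nil
	}
	return nil, fmt.Errorf("block %s/26 exhausted", b.base)
}

func main() {
	b := &block{base: net.ParseIP("192.168.88.128").To4()}
	for i := 0; i < 7; i++ {
		b.used[i] = true // .128-.134 already taken per earlier log lines
	}
	ip, _ := b.assign()
	fmt.Println("claimed", ip) // claimed 192.168.88.135, as in the log
}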
Mar 13 00:40:38.698718 containerd[1555]: 2026-03-13 00:40:38.625 [INFO][4761] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" HandleID="k8s-pod-network.8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Workload="localhost-k8s-goldmane--5b85766d88--tp5kj-eth0"
Mar 13 00:40:38.699360 containerd[1555]: 2026-03-13 00:40:38.629 [INFO][4738] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Namespace="calico-system" Pod="goldmane-5b85766d88-tp5kj" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--tp5kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--tp5kj-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"b013b712-c24d-468f-86cc-2f4dbb3799a5", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-tp5kj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid10becbd42c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:40:38.699360 containerd[1555]: 2026-03-13 00:40:38.631 [INFO][4738] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Namespace="calico-system" Pod="goldmane-5b85766d88-tp5kj" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--tp5kj-eth0"
Mar 13 00:40:38.699360 containerd[1555]: 2026-03-13 00:40:38.631 [INFO][4738] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid10becbd42c ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Namespace="calico-system" Pod="goldmane-5b85766d88-tp5kj" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--tp5kj-eth0"
Mar 13 00:40:38.699360 containerd[1555]: 2026-03-13 00:40:38.652 [INFO][4738] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Namespace="calico-system" Pod="goldmane-5b85766d88-tp5kj" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--tp5kj-eth0"
Mar 13 00:40:38.699360 containerd[1555]: 2026-03-13 00:40:38.658 [INFO][4738] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Namespace="calico-system" Pod="goldmane-5b85766d88-tp5kj" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--tp5kj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--tp5kj-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"b013b712-c24d-468f-86cc-2f4dbb3799a5", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 40, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b", Pod:"goldmane-5b85766d88-tp5kj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid10becbd42c", MAC:"a2:3e:3e:cb:75:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:40:38.699360 containerd[1555]: 2026-03-13 00:40:38.687 [INFO][4738] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" Namespace="calico-system" Pod="goldmane-5b85766d88-tp5kj" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--tp5kj-eth0"
Mar 13 00:40:38.701100 containerd[1555]: time="2026-03-13T00:40:38.701019300Z" level=info msg="connecting to shim 0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63" address="unix:///run/containerd/s/c90b8ed68db16276b7e81bed3a2e25361eec24e0205681a13cb38c380cf66589" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:40:38.777020 containerd[1555]: time="2026-03-13T00:40:38.776509036Z" level=info msg="connecting to shim 8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b" address="unix:///run/containerd/s/b39c6ea17bbf51fc189701fca7a1fee1f731c25f5b4e99a1b7e561998d8bb889" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:40:38.788711 systemd[1]: Started cri-containerd-0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63.scope - libcontainer container 0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63.
Mar 13 00:40:38.830775 systemd[1]: Started cri-containerd-8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b.scope - libcontainer container 8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b.
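Both pods above go through the same dataplane steps: the plugin names the host-side veth (cali72b61c7082a, calid10becbd42c) and disables IPv4 forwarding on it before recording the MAC in the endpoint. A rough sketch of those two steps expressed as iproute2/sysctl invocations from Go, not Calico's actual netlink code; it must run as root, and the pod netns name here is hypothetical:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out so each step reads like the log line it mirrors.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %v (%s)", args, err, out)
	}
	return nil
}

func main() {
	host := "cali72b61c7082a" // "Setting the host side veth name to cali72b61c7082a"
	// Create the veth pair with the container end placed directly in the
	// pod's network namespace (the netns name "cni-0212a7c2" is made up).
	if err := run("ip", "link", "add", host, "type", "veth",
		"peer", "name", "eth0", "netns", "cni-0212a7c2"); err != nil {
		panic(err)
	}
	// "Disabling IPv4 forwarding" on the host-side interface.
	if err := run("sysctl", "-w",
		"net.ipv4.conf."+host+".forwarding=0"); err != nil {
		panic(err)
	}
}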
Mar 13 00:40:38.855323 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 13 00:40:38.887658 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 13 00:40:38.954709 containerd[1555]: time="2026-03-13T00:40:38.954109118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-h2clt,Uid:e6f02b0a-183f-4b3c-87a6-0ef7fdef800d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63\""
Mar 13 00:40:38.957070 kubelet[2706]: E0313 00:40:38.956959 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:40:38.967106 containerd[1555]: time="2026-03-13T00:40:38.966969836Z" level=info msg="CreateContainer within sandbox \"0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 13 00:40:38.994213 containerd[1555]: time="2026-03-13T00:40:38.993750446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-tp5kj,Uid:b013b712-c24d-468f-86cc-2f4dbb3799a5,Namespace:calico-system,Attempt:0,} returns sandbox id \"8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b\""
Mar 13 00:40:39.007056 containerd[1555]: time="2026-03-13T00:40:39.006880287Z" level=info msg="Container e8f5419a1fb447012fbe2d29c4474e73cf8bbb882687629e2cf4524c26200163: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:40:39.021314 containerd[1555]: time="2026-03-13T00:40:39.021221930Z" level=info msg="CreateContainer within sandbox \"0212a7c256573f0d99f4f3ab32c388c7f1f49ac623e90c09fec8c88a69668b63\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e8f5419a1fb447012fbe2d29c4474e73cf8bbb882687629e2cf4524c26200163\""
Mar 13 00:40:39.023915 containerd[1555]: time="2026-03-13T00:40:39.023797265Z" level=info msg="StartContainer for \"e8f5419a1fb447012fbe2d29c4474e73cf8bbb882687629e2cf4524c26200163\""
Mar 13 00:40:39.025316 containerd[1555]: time="2026-03-13T00:40:39.025186176Z" level=info msg="connecting to shim e8f5419a1fb447012fbe2d29c4474e73cf8bbb882687629e2cf4524c26200163" address="unix:///run/containerd/s/c90b8ed68db16276b7e81bed3a2e25361eec24e0205681a13cb38c380cf66589" protocol=ttrpc version=3
Mar 13 00:40:39.065685 systemd[1]: Started cri-containerd-e8f5419a1fb447012fbe2d29c4474e73cf8bbb882687629e2cf4524c26200163.scope - libcontainer container e8f5419a1fb447012fbe2d29c4474e73cf8bbb882687629e2cf4524c26200163.
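The recurring kubelet "Nameserver limits exceeded" error is about pod resolv.conf generation: the classic glibc resolver honors at most three nameserver entries, so kubelet applies the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and warns that the rest were omitted. A sketch of that truncation; the helper name and constant are illustrative, not kubelet's actual symbols:

package main

import "fmt"

// maxNameservers mirrors the glibc limit of three nameserver lines in
// resolv.conf that kubelet enforces when composing a pod's resolv.conf.
const maxNameservers = 3

// truncateNameservers keeps the first three servers and reports whether
// any were dropped, the condition behind the log line above.
func truncateNameservers(servers []string) (kept []string, omitted bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"} // 4th entry hypothetical
	kept, omitted := truncateNameservers(host)
	if omitted {
		fmt.Printf("Nameserver limits exceeded, applied nameserver line: %v\n", kept)
	}
}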
Mar 13 00:40:39.109761 systemd-networkd[1467]: cali4fdd878716a: Gained IPv6LL
Mar 13 00:40:39.224997 containerd[1555]: time="2026-03-13T00:40:39.224835453Z" level=info msg="StartContainer for \"e8f5419a1fb447012fbe2d29c4474e73cf8bbb882687629e2cf4524c26200163\" returns successfully"
Mar 13 00:40:39.262964 kubelet[2706]: E0313 00:40:39.262903 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:40:39.263994 containerd[1555]: time="2026-03-13T00:40:39.263312420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7qrc5,Uid:8b4e81f6-f633-4e5a-940c-c9d165d3fd0e,Namespace:kube-system,Attempt:0,}"
Mar 13 00:40:39.629569 systemd-networkd[1467]: cali77b3cf67a08: Link UP
Mar 13 00:40:39.630988 systemd-networkd[1467]: cali77b3cf67a08: Gained carrier
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.434 [INFO][4938] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0 coredns-674b8bbfcf- kube-system 8b4e81f6-f633-4e5a-940c-c9d165d3fd0e 857 0 2026-03-13 00:39:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-7qrc5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali77b3cf67a08 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qrc5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7qrc5-"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.434 [INFO][4938] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qrc5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.537 [INFO][4958] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" HandleID="k8s-pod-network.4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Workload="localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.560 [INFO][4958] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" HandleID="k8s-pod-network.4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Workload="localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000390e20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-7qrc5", "timestamp":"2026-03-13 00:40:39.537864646 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00059bb80)}
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.560 [INFO][4958] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.560 [INFO][4958] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.560 [INFO][4958] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.564 [INFO][4958] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" host="localhost"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.571 [INFO][4958] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.580 [INFO][4958] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.583 [INFO][4958] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.589 [INFO][4958] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.589 [INFO][4958] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" host="localhost"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.593 [INFO][4958] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.603 [INFO][4958] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" host="localhost"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.618 [INFO][4958] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" host="localhost"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.618 [INFO][4958] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" host="localhost"
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.618 [INFO][4958] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 13 00:40:39.680207 containerd[1555]: 2026-03-13 00:40:39.618 [INFO][4958] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" HandleID="k8s-pod-network.4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Workload="localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0"
Mar 13 00:40:39.682645 containerd[1555]: 2026-03-13 00:40:39.623 [INFO][4938] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qrc5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8b4e81f6-f633-4e5a-940c-c9d165d3fd0e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-7qrc5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77b3cf67a08", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:40:39.682645 containerd[1555]: 2026-03-13 00:40:39.623 [INFO][4938] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qrc5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0"
Mar 13 00:40:39.682645 containerd[1555]: 2026-03-13 00:40:39.623 [INFO][4938] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77b3cf67a08 ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qrc5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0"
Mar 13 00:40:39.682645 containerd[1555]: 2026-03-13 00:40:39.632 [INFO][4938] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qrc5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0"
Mar 13 00:40:39.682645 containerd[1555]: 2026-03-13 00:40:39.634 [INFO][4938] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qrc5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"8b4e81f6-f633-4e5a-940c-c9d165d3fd0e", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 39, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53", Pod:"coredns-674b8bbfcf-7qrc5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali77b3cf67a08", MAC:"fa:ce:89:39:08:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Mar 13 00:40:39.682645 containerd[1555]: 2026-03-13 00:40:39.666 [INFO][4938] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" Namespace="kube-system" Pod="coredns-674b8bbfcf-7qrc5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--7qrc5-eth0"
Mar 13 00:40:39.758537 containerd[1555]: time="2026-03-13T00:40:39.758460839Z" level=info msg="connecting to shim 4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53" address="unix:///run/containerd/s/152d84e3af9aabbe5a9238e94c4115004e402322f9fcbcd8045a6095676877aa" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:40:39.828665 systemd[1]: Started cri-containerd-4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53.scope - libcontainer container 4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53.
Mar 13 00:40:39.869168 systemd-resolved[1469]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 13 00:40:39.919602 containerd[1555]: time="2026-03-13T00:40:39.918311697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-7qrc5,Uid:8b4e81f6-f633-4e5a-940c-c9d165d3fd0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53\""
Mar 13 00:40:39.921378 kubelet[2706]: E0313 00:40:39.921338 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:40:39.929630 containerd[1555]: time="2026-03-13T00:40:39.929540951Z" level=info msg="CreateContainer within sandbox \"4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 13 00:40:39.956337 containerd[1555]: time="2026-03-13T00:40:39.955034650Z" level=info msg="Container 08e8b16c228501664fcb327f17cbe1c8ee1084c9d0f81dd8687721959111ad89: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:40:39.967039 containerd[1555]: time="2026-03-13T00:40:39.963735490Z" level=info msg="CreateContainer within sandbox \"4983451cd27df3d99a58ff2d1a6ea38d25113f7383698080c3ad4a65da4edd53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08e8b16c228501664fcb327f17cbe1c8ee1084c9d0f81dd8687721959111ad89\""
Mar 13 00:40:39.967039 containerd[1555]: time="2026-03-13T00:40:39.965445384Z" level=info msg="StartContainer for \"08e8b16c228501664fcb327f17cbe1c8ee1084c9d0f81dd8687721959111ad89\""
Mar 13 00:40:39.967039 containerd[1555]: time="2026-03-13T00:40:39.966476277Z" level=info msg="connecting to shim 08e8b16c228501664fcb327f17cbe1c8ee1084c9d0f81dd8687721959111ad89" address="unix:///run/containerd/s/152d84e3af9aabbe5a9238e94c4115004e402322f9fcbcd8045a6095676877aa" protocol=ttrpc version=3
Mar 13 00:40:39.993275 containerd[1555]: time="2026-03-13T00:40:39.986303097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:40:39.993275 containerd[1555]: time="2026-03-13T00:40:39.989387335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Mar 13 00:40:39.993275 containerd[1555]: time="2026-03-13T00:40:39.991098495Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:40:39.995113 containerd[1555]: time="2026-03-13T00:40:39.994315961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:40:40.015485 containerd[1555]: time="2026-03-13T00:40:39.995556724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.101572323s"
Mar 13 00:40:40.015485 containerd[1555]: time="2026-03-13T00:40:39.995692488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 13 00:40:40.015485 containerd[1555]: time="2026-03-13T00:40:39.997728609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 13 00:40:40.015485 containerd[1555]: time="2026-03-13T00:40:40.007951261Z" level=info msg="CreateContainer within sandbox \"ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 13 00:40:40.018550 systemd-networkd[1467]: cali72b61c7082a: Gained IPv6LL
Mar 13 00:40:40.051542 containerd[1555]: time="2026-03-13T00:40:40.051489008Z" level=info msg="Container 7f378eca8551c798eccc49cfc1a2851b8f9f8a6aa54b60c2897e07fcdcbe086f: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:40:40.074093 containerd[1555]: time="2026-03-13T00:40:40.074013434Z" level=info msg="CreateContainer within sandbox \"ac91856d1f67e8d92567f4aa524eadabccd3694be11f6f02e5528caa33605521\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7f378eca8551c798eccc49cfc1a2851b8f9f8a6aa54b60c2897e07fcdcbe086f\""
Mar 13 00:40:40.076386 containerd[1555]: time="2026-03-13T00:40:40.075396804Z" level=info msg="StartContainer for \"7f378eca8551c798eccc49cfc1a2851b8f9f8a6aa54b60c2897e07fcdcbe086f\""
Mar 13 00:40:40.079247 containerd[1555]: time="2026-03-13T00:40:40.079160729Z" level=info msg="connecting to shim 7f378eca8551c798eccc49cfc1a2851b8f9f8a6aa54b60c2897e07fcdcbe086f" address="unix:///run/containerd/s/0bec895767ac364f0f08dfe9d10d29c18f93eae63922a199942d32b050cecf00" protocol=ttrpc version=3
Mar 13 00:40:40.092803 systemd[1]: Started cri-containerd-08e8b16c228501664fcb327f17cbe1c8ee1084c9d0f81dd8687721959111ad89.scope - libcontainer container 08e8b16c228501664fcb327f17cbe1c8ee1084c9d0f81dd8687721959111ad89.
Mar 13 00:40:40.149058 systemd[1]: Started cri-containerd-7f378eca8551c798eccc49cfc1a2851b8f9f8a6aa54b60c2897e07fcdcbe086f.scope - libcontainer container 7f378eca8551c798eccc49cfc1a2851b8f9f8a6aa54b60c2897e07fcdcbe086f.
Mar 13 00:40:40.226366 containerd[1555]: time="2026-03-13T00:40:40.226131178Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:40:40.227064 containerd[1555]: time="2026-03-13T00:40:40.227012929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 13 00:40:40.231810 containerd[1555]: time="2026-03-13T00:40:40.231704660Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 233.764386ms"
Mar 13 00:40:40.231810 containerd[1555]: time="2026-03-13T00:40:40.231785261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 13 00:40:40.235701 containerd[1555]: time="2026-03-13T00:40:40.235648449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 13 00:40:40.242990 containerd[1555]: time="2026-03-13T00:40:40.242803968Z" level=info msg="CreateContainer within sandbox \"4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 13 00:40:40.252502 kubelet[2706]: E0313 00:40:40.252383 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:40:40.275236 containerd[1555]: time="2026-03-13T00:40:40.274479974Z" level=info msg="StartContainer for \"08e8b16c228501664fcb327f17cbe1c8ee1084c9d0f81dd8687721959111ad89\" returns successfully"
Mar 13 00:40:40.298364 containerd[1555]: time="2026-03-13T00:40:40.298292170Z" level=info msg="Container 45313b4ad182c9ba361122eb71384b989bae8386ca28472b430429fb42eb3c67: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:40:40.303279 kubelet[2706]: I0313 00:40:40.303198 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-h2clt" podStartSLOduration=46.303179554 podStartE2EDuration="46.303179554s" podCreationTimestamp="2026-03-13 00:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:40.296472356 +0000 UTC m=+54.158463255" watchObservedRunningTime="2026-03-13 00:40:40.303179554 +0000 UTC m=+54.165170423"
Mar 13 00:40:40.324481 containerd[1555]: time="2026-03-13T00:40:40.324104134Z" level=info msg="CreateContainer within sandbox \"4ae7125a7c896fbd06e47a29a70de0a1184e71b6452280d555a59706ca1ad000\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"45313b4ad182c9ba361122eb71384b989bae8386ca28472b430429fb42eb3c67\""
Mar 13 00:40:40.329286 containerd[1555]: time="2026-03-13T00:40:40.328596502Z" level=info msg="StartContainer for \"45313b4ad182c9ba361122eb71384b989bae8386ca28472b430429fb42eb3c67\""
Mar 13 00:40:40.332442 containerd[1555]: time="2026-03-13T00:40:40.331122604Z" level=info msg="connecting to shim 45313b4ad182c9ba361122eb71384b989bae8386ca28472b430429fb42eb3c67" address="unix:///run/containerd/s/16a23e3e11780b0cc8fb5cef171bc77ce675b37dee2855b45d51367546fb9bfb" protocol=ttrpc version=3
Mar 13 00:40:40.382637 containerd[1555]: time="2026-03-13T00:40:40.382561595Z" level=info msg="StartContainer for \"7f378eca8551c798eccc49cfc1a2851b8f9f8a6aa54b60c2897e07fcdcbe086f\" returns successfully"
Mar 13 00:40:40.398733 systemd[1]: Started cri-containerd-45313b4ad182c9ba361122eb71384b989bae8386ca28472b430429fb42eb3c67.scope - libcontainer container 45313b4ad182c9ba361122eb71384b989bae8386ca28472b430429fb42eb3c67.
Mar 13 00:40:40.482230 containerd[1555]: time="2026-03-13T00:40:40.482093827Z" level=info msg="StartContainer for \"45313b4ad182c9ba361122eb71384b989bae8386ca28472b430429fb42eb3c67\" returns successfully"
Mar 13 00:40:40.580846 systemd-networkd[1467]: calid10becbd42c: Gained IPv6LL
Mar 13 00:40:41.029669 systemd-networkd[1467]: cali77b3cf67a08: Gained IPv6LL
Mar 13 00:40:41.276498 kubelet[2706]: E0313 00:40:41.275956 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:40:41.298482 kubelet[2706]: E0313 00:40:41.292775 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:40:41.300470 kubelet[2706]: I0313 00:40:41.299498 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b5f9d757-z28tt" podStartSLOduration=34.193703955 podStartE2EDuration="37.299480032s" podCreationTimestamp="2026-03-13 00:40:04 +0000 UTC" firstStartedPulling="2026-03-13 00:40:36.890758309 +0000 UTC m=+50.752749178" lastFinishedPulling="2026-03-13 00:40:39.996534387 +0000 UTC m=+53.858525255" observedRunningTime="2026-03-13 00:40:41.293676382 +0000 UTC m=+55.155667251" watchObservedRunningTime="2026-03-13 00:40:41.299480032 +0000 UTC m=+55.161470902"
Mar 13 00:40:41.325658 kubelet[2706]: I0313 00:40:41.325598 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-7qrc5" podStartSLOduration=47.325575251 podStartE2EDuration="47.325575251s" podCreationTimestamp="2026-03-13 00:39:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:40:41.324553261 +0000 UTC m=+55.186544160" watchObservedRunningTime="2026-03-13 00:40:41.325575251 +0000 UTC m=+55.187566120"
Mar 13 00:40:41.999179 kubelet[2706]: I0313 00:40:41.999123 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b5f9d757-5wn7z" podStartSLOduration=34.747199636 podStartE2EDuration="37.999105254s" podCreationTimestamp="2026-03-13 00:40:04 +0000 UTC" firstStartedPulling="2026-03-13 00:40:36.982159932 +0000 UTC m=+50.844150801" lastFinishedPulling="2026-03-13 00:40:40.23406555 +0000 UTC m=+54.096056419" observedRunningTime="2026-03-13 00:40:41.348350148 +0000 UTC m=+55.210341007" watchObservedRunningTime="2026-03-13 00:40:41.999105254 +0000 UTC m=+55.861096123"
Mar 13 00:40:42.293735 kubelet[2706]: I0313 00:40:42.292753 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 00:40:42.296478 kubelet[2706]: E0313 00:40:42.295933 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:40:42.297127 kubelet[2706]: E0313 00:40:42.296799 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:40:42.565709 containerd[1555]: time="2026-03-13T00:40:42.565532427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:40:42.566692 containerd[1555]: time="2026-03-13T00:40:42.566658806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 13 00:40:42.568032 containerd[1555]: time="2026-03-13T00:40:42.567964696Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:40:42.570831 containerd[1555]: time="2026-03-13T00:40:42.570685120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:40:42.571865 containerd[1555]: time="2026-03-13T00:40:42.571273400Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.335563168s"
Mar 13 00:40:42.571865 containerd[1555]: time="2026-03-13T00:40:42.571302879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 13 00:40:42.573250 containerd[1555]: time="2026-03-13T00:40:42.573183940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 13 00:40:42.591860 containerd[1555]: time="2026-03-13T00:40:42.591761919Z" level=info msg="CreateContainer within sandbox \"b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 13 00:40:42.611295 containerd[1555]: time="2026-03-13T00:40:42.610560936Z" level=info msg="Container a134dda3af43d95ef522012f39e5662d15c12d8d8e13fb3d50844f8ea219de3d: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:40:42.619869 containerd[1555]: time="2026-03-13T00:40:42.619779664Z" level=info msg="CreateContainer within sandbox \"b78c9bf4e7b934155c119ee4d9032751b657ab802744bc8376fa9a948afb5533\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a134dda3af43d95ef522012f39e5662d15c12d8d8e13fb3d50844f8ea219de3d\""
Mar 13 00:40:42.620850 containerd[1555]: time="2026-03-13T00:40:42.620816608Z" level=info msg="StartContainer for \"a134dda3af43d95ef522012f39e5662d15c12d8d8e13fb3d50844f8ea219de3d\""
Mar 13 00:40:42.622186 containerd[1555]: time="2026-03-13T00:40:42.622161986Z" level=info msg="connecting to shim a134dda3af43d95ef522012f39e5662d15c12d8d8e13fb3d50844f8ea219de3d" address="unix:///run/containerd/s/3ccab395b90c876b2724ed00d1bd2a01c3359fbb5cfac9f4313ab72c3fad4d1c" protocol=ttrpc version=3
Mar 13 00:40:42.677702 systemd[1]: Started cri-containerd-a134dda3af43d95ef522012f39e5662d15c12d8d8e13fb3d50844f8ea219de3d.scope - libcontainer container a134dda3af43d95ef522012f39e5662d15c12d8d8e13fb3d50844f8ea219de3d.
Mar 13 00:40:42.823541 containerd[1555]: time="2026-03-13T00:40:42.822368529Z" level=info msg="StartContainer for \"a134dda3af43d95ef522012f39e5662d15c12d8d8e13fb3d50844f8ea219de3d\" returns successfully"
Mar 13 00:40:43.297488 kubelet[2706]: E0313 00:40:43.297317 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:40:44.418968 kubelet[2706]: I0313 00:40:44.418851 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c6bd96965-2jhx6" podStartSLOduration=34.829366309 podStartE2EDuration="39.418395227s" podCreationTimestamp="2026-03-13 00:40:05 +0000 UTC" firstStartedPulling="2026-03-13 00:40:37.983119566 +0000 UTC m=+51.845110435" lastFinishedPulling="2026-03-13 00:40:42.572148484 +0000 UTC m=+56.434139353" observedRunningTime="2026-03-13 00:40:43.309522685 +0000 UTC m=+57.171513565" watchObservedRunningTime="2026-03-13 00:40:44.418395227 +0000 UTC m=+58.280386126"
Mar 13 00:40:44.699788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount735494559.mount: Deactivated successfully.
Mar 13 00:40:45.145754 containerd[1555]: time="2026-03-13T00:40:45.145517137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:40:45.146504 containerd[1555]: time="2026-03-13T00:40:45.146431305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 13 00:40:45.148276 containerd[1555]: time="2026-03-13T00:40:45.147950024Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:40:45.151081 containerd[1555]: time="2026-03-13T00:40:45.150978955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:40:45.152101 containerd[1555]: time="2026-03-13T00:40:45.151979727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.578749415s"
Mar 13 00:40:45.152101 containerd[1555]: time="2026-03-13T00:40:45.152046692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 13 00:40:45.156949 containerd[1555]: time="2026-03-13T00:40:45.156899795Z" level=info msg="CreateContainer within sandbox \"8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 13 00:40:45.168132 containerd[1555]: time="2026-03-13T00:40:45.168039286Z" level=info msg="Container 46885ee80fc6c336ecd6a944c0e515291ca605763375e5b147120c10e82a87c2: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:40:45.184793 containerd[1555]: time="2026-03-13T00:40:45.184674961Z" level=info msg="CreateContainer within sandbox \"8673b31b474fdd2ebf5d13d3110b4926531b8f3cd2d579aac912aa351343ae8b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"46885ee80fc6c336ecd6a944c0e515291ca605763375e5b147120c10e82a87c2\""
Mar 13 00:40:45.185649 containerd[1555]: time="2026-03-13T00:40:45.185623602Z" level=info msg="StartContainer for \"46885ee80fc6c336ecd6a944c0e515291ca605763375e5b147120c10e82a87c2\""
Mar 13 00:40:45.188063 containerd[1555]: time="2026-03-13T00:40:45.187977590Z" level=info msg="connecting to shim 46885ee80fc6c336ecd6a944c0e515291ca605763375e5b147120c10e82a87c2" address="unix:///run/containerd/s/b39c6ea17bbf51fc189701fca7a1fee1f731c25f5b4e99a1b7e561998d8bb889" protocol=ttrpc version=3
Mar 13 00:40:45.228860 systemd[1]: Started cri-containerd-46885ee80fc6c336ecd6a944c0e515291ca605763375e5b147120c10e82a87c2.scope - libcontainer container 46885ee80fc6c336ecd6a944c0e515291ca605763375e5b147120c10e82a87c2.
Mar 13 00:40:45.305373 containerd[1555]: time="2026-03-13T00:40:45.305295069Z" level=info msg="StartContainer for \"46885ee80fc6c336ecd6a944c0e515291ca605763375e5b147120c10e82a87c2\" returns successfully"
Mar 13 00:40:46.429134 kubelet[2706]: I0313 00:40:46.429050 2706 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-tp5kj" podStartSLOduration=35.273098768 podStartE2EDuration="41.429033136s" podCreationTimestamp="2026-03-13 00:40:05 +0000 UTC" firstStartedPulling="2026-03-13 00:40:38.996940142 +0000 UTC m=+52.858931010" lastFinishedPulling="2026-03-13 00:40:45.152874509 +0000 UTC m=+59.014865378" observedRunningTime="2026-03-13 00:40:46.326300273 +0000 UTC m=+60.188291142" watchObservedRunningTime="2026-03-13 00:40:46.429033136 +0000 UTC m=+60.291024006"
Mar 13 00:40:49.203257 systemd[1]: Started sshd@7-10.0.0.55:22-10.0.0.1:36708.service - OpenSSH per-connection server daemon (10.0.0.1:36708).
Mar 13 00:40:49.302259 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 36708 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:40:49.304981 sshd-session[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:40:49.314130 systemd-logind[1537]: New session 8 of user core.
Mar 13 00:40:49.325601 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 13 00:40:49.563149 sshd[5357]: Connection closed by 10.0.0.1 port 36708
Mar 13 00:40:49.563470 sshd-session[5354]: pam_unix(sshd:session): session closed for user core
Mar 13 00:40:49.569516 systemd[1]: sshd@7-10.0.0.55:22-10.0.0.1:36708.service: Deactivated successfully.
Mar 13 00:40:49.571869 systemd[1]: session-8.scope: Deactivated successfully.
Mar 13 00:40:49.572966 systemd-logind[1537]: Session 8 logged out. Waiting for processes to exit.
Mar 13 00:40:49.574851 systemd-logind[1537]: Removed session 8.
Mar 13 00:40:54.582880 systemd[1]: Started sshd@8-10.0.0.55:22-10.0.0.1:36718.service - OpenSSH per-connection server daemon (10.0.0.1:36718).
Mar 13 00:40:54.702916 sshd[5378]: Accepted publickey for core from 10.0.0.1 port 36718 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:40:54.706293 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:40:54.715068 systemd-logind[1537]: New session 9 of user core.
Mar 13 00:40:54.724662 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 13 00:40:54.843879 sshd[5381]: Connection closed by 10.0.0.1 port 36718
Mar 13 00:40:54.844250 sshd-session[5378]: pam_unix(sshd:session): session closed for user core
Mar 13 00:40:54.850282 systemd[1]: sshd@8-10.0.0.55:22-10.0.0.1:36718.service: Deactivated successfully.
Mar 13 00:40:54.854306 systemd[1]: session-9.scope: Deactivated successfully.
Mar 13 00:40:54.858743 systemd-logind[1537]: Session 9 logged out. Waiting for processes to exit.
Mar 13 00:40:54.863099 systemd-logind[1537]: Removed session 9.
Mar 13 00:40:59.263402 kubelet[2706]: E0313 00:40:59.263210 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:40:59.873530 systemd[1]: Started sshd@9-10.0.0.55:22-10.0.0.1:43146.service - OpenSSH per-connection server daemon (10.0.0.1:43146).
Mar 13 00:40:59.952783 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 43146 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:40:59.955176 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:40:59.963213 systemd-logind[1537]: New session 10 of user core.
Mar 13 00:40:59.971741 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 13 00:41:00.112745 sshd[5406]: Connection closed by 10.0.0.1 port 43146
Mar 13 00:41:00.113221 sshd-session[5403]: pam_unix(sshd:session): session closed for user core
Mar 13 00:41:00.119788 systemd[1]: sshd@9-10.0.0.55:22-10.0.0.1:43146.service: Deactivated successfully.
Mar 13 00:41:00.123218 systemd[1]: session-10.scope: Deactivated successfully.
Mar 13 00:41:00.126299 systemd-logind[1537]: Session 10 logged out. Waiting for processes to exit.
Mar 13 00:41:00.130149 systemd-logind[1537]: Removed session 10.
Mar 13 00:41:05.129276 systemd[1]: Started sshd@10-10.0.0.55:22-10.0.0.1:43158.service - OpenSSH per-connection server daemon (10.0.0.1:43158).
Mar 13 00:41:05.234641 sshd[5420]: Accepted publickey for core from 10.0.0.1 port 43158 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:41:05.236293 sshd-session[5420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:41:05.244997 systemd-logind[1537]: New session 11 of user core.
Mar 13 00:41:05.264739 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 13 00:41:05.402763 sshd[5423]: Connection closed by 10.0.0.1 port 43158
Mar 13 00:41:05.403172 sshd-session[5420]: pam_unix(sshd:session): session closed for user core
Mar 13 00:41:05.407952 systemd[1]: sshd@10-10.0.0.55:22-10.0.0.1:43158.service: Deactivated successfully.
Mar 13 00:41:05.410584 systemd[1]: session-11.scope: Deactivated successfully.
Mar 13 00:41:05.411529 systemd-logind[1537]: Session 11 logged out. Waiting for processes to exit.
Mar 13 00:41:05.413535 systemd-logind[1537]: Removed session 11.
Mar 13 00:41:07.262488 kubelet[2706]: E0313 00:41:07.262358 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:41:10.427633 systemd[1]: Started sshd@11-10.0.0.55:22-10.0.0.1:58210.service - OpenSSH per-connection server daemon (10.0.0.1:58210).
Mar 13 00:41:10.509982 sshd[5477]: Accepted publickey for core from 10.0.0.1 port 58210 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:41:10.512333 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:41:10.519716 systemd-logind[1537]: New session 12 of user core.
Mar 13 00:41:10.527811 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 13 00:41:10.668666 sshd[5480]: Connection closed by 10.0.0.1 port 58210
Mar 13 00:41:10.669124 sshd-session[5477]: pam_unix(sshd:session): session closed for user core
Mar 13 00:41:10.675046 systemd[1]: sshd@11-10.0.0.55:22-10.0.0.1:58210.service: Deactivated successfully.
Mar 13 00:41:10.678247 systemd[1]: session-12.scope: Deactivated successfully.
Mar 13 00:41:10.680024 systemd-logind[1537]: Session 12 logged out. Waiting for processes to exit.
Mar 13 00:41:10.682066 systemd-logind[1537]: Removed session 12.
Mar 13 00:41:15.686479 systemd[1]: Started sshd@12-10.0.0.55:22-10.0.0.1:58224.service - OpenSSH per-connection server daemon (10.0.0.1:58224).
Mar 13 00:41:15.758019 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 58224 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:41:15.760198 sshd-session[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:41:15.767243 systemd-logind[1537]: New session 13 of user core.
Mar 13 00:41:15.774585 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 13 00:41:15.904659 sshd[5520]: Connection closed by 10.0.0.1 port 58224
Mar 13 00:41:15.905075 sshd-session[5517]: pam_unix(sshd:session): session closed for user core
Mar 13 00:41:15.911258 systemd[1]: sshd@12-10.0.0.55:22-10.0.0.1:58224.service: Deactivated successfully.
Mar 13 00:41:15.913660 systemd[1]: session-13.scope: Deactivated successfully.
Mar 13 00:41:15.915913 systemd-logind[1537]: Session 13 logged out. Waiting for processes to exit.
Mar 13 00:41:15.918197 systemd-logind[1537]: Removed session 13.
Mar 13 00:41:18.391302 kubelet[2706]: I0313 00:41:18.391177 2706 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 00:41:20.262700 kubelet[2706]: E0313 00:41:20.262592 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:41:20.917263 systemd[1]: Started sshd@13-10.0.0.55:22-10.0.0.1:55500.service - OpenSSH per-connection server daemon (10.0.0.1:55500).
Mar 13 00:41:21.000043 sshd[5611]: Accepted publickey for core from 10.0.0.1 port 55500 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:41:21.003071 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:41:21.010023 systemd-logind[1537]: New session 14 of user core.
Mar 13 00:41:21.016657 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 13 00:41:21.205048 sshd[5614]: Connection closed by 10.0.0.1 port 55500
Mar 13 00:41:21.206272 sshd-session[5611]: pam_unix(sshd:session): session closed for user core
Mar 13 00:41:21.216719 systemd[1]: sshd@13-10.0.0.55:22-10.0.0.1:55500.service: Deactivated successfully.
Mar 13 00:41:21.219167 systemd[1]: session-14.scope: Deactivated successfully.
Mar 13 00:41:21.220639 systemd-logind[1537]: Session 14 logged out. Waiting for processes to exit.
Mar 13 00:41:21.224322 systemd[1]: Started sshd@14-10.0.0.55:22-10.0.0.1:55514.service - OpenSSH per-connection server daemon (10.0.0.1:55514).
Mar 13 00:41:21.226492 systemd-logind[1537]: Removed session 14.
Mar 13 00:41:21.320482 sshd[5629]: Accepted publickey for core from 10.0.0.1 port 55514 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:41:21.323907 sshd-session[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:41:21.332464 systemd-logind[1537]: New session 15 of user core.
Mar 13 00:41:21.342841 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 13 00:41:21.496277 sshd[5632]: Connection closed by 10.0.0.1 port 55514
Mar 13 00:41:21.497064 sshd-session[5629]: pam_unix(sshd:session): session closed for user core
Mar 13 00:41:21.513812 systemd[1]: sshd@14-10.0.0.55:22-10.0.0.1:55514.service: Deactivated successfully.
Mar 13 00:41:21.517725 systemd[1]: session-15.scope: Deactivated successfully.
Mar 13 00:41:21.520533 systemd-logind[1537]: Session 15 logged out. Waiting for processes to exit.
Mar 13 00:41:21.527102 systemd[1]: Started sshd@15-10.0.0.55:22-10.0.0.1:55524.service - OpenSSH per-connection server daemon (10.0.0.1:55524).
Mar 13 00:41:21.529624 systemd-logind[1537]: Removed session 15.
Mar 13 00:41:21.610806 sshd[5643]: Accepted publickey for core from 10.0.0.1 port 55524 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:41:21.612909 sshd-session[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:41:21.619537 systemd-logind[1537]: New session 16 of user core.
Mar 13 00:41:21.628630 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 13 00:41:21.737712 sshd[5646]: Connection closed by 10.0.0.1 port 55524
Mar 13 00:41:21.738143 sshd-session[5643]: pam_unix(sshd:session): session closed for user core
Mar 13 00:41:21.745168 systemd[1]: sshd@15-10.0.0.55:22-10.0.0.1:55524.service: Deactivated successfully.
Mar 13 00:41:21.747931 systemd[1]: session-16.scope: Deactivated successfully.
Mar 13 00:41:21.749694 systemd-logind[1537]: Session 16 logged out. Waiting for processes to exit.
Mar 13 00:41:21.751975 systemd-logind[1537]: Removed session 16.
Mar 13 00:41:24.267126 kubelet[2706]: E0313 00:41:24.267038 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:41:26.263167 kubelet[2706]: E0313 00:41:26.263032 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 13 00:41:26.762585 systemd[1]: Started sshd@16-10.0.0.55:22-10.0.0.1:55534.service - OpenSSH per-connection server daemon (10.0.0.1:55534).
Mar 13 00:41:26.838120 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 55534 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo
Mar 13 00:41:26.841140 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:41:26.853110 systemd-logind[1537]: New session 17 of user core.
Mar 13 00:41:26.863613 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 13 00:41:26.989644 sshd[5664]: Connection closed by 10.0.0.1 port 55534 Mar 13 00:41:26.990180 sshd-session[5661]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:26.996765 systemd[1]: sshd@16-10.0.0.55:22-10.0.0.1:55534.service: Deactivated successfully. Mar 13 00:41:27.001192 systemd[1]: session-17.scope: Deactivated successfully. Mar 13 00:41:27.002678 systemd-logind[1537]: Session 17 logged out. Waiting for processes to exit. Mar 13 00:41:27.004486 systemd-logind[1537]: Removed session 17. Mar 13 00:41:32.004865 systemd[1]: Started sshd@17-10.0.0.55:22-10.0.0.1:53996.service - OpenSSH per-connection server daemon (10.0.0.1:53996). Mar 13 00:41:32.086914 sshd[5678]: Accepted publickey for core from 10.0.0.1 port 53996 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:32.088876 sshd-session[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:32.095081 systemd-logind[1537]: New session 18 of user core. Mar 13 00:41:32.102705 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 13 00:41:32.222660 sshd[5681]: Connection closed by 10.0.0.1 port 53996 Mar 13 00:41:32.223337 sshd-session[5678]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:32.238713 systemd[1]: sshd@17-10.0.0.55:22-10.0.0.1:53996.service: Deactivated successfully. Mar 13 00:41:32.242313 systemd[1]: session-18.scope: Deactivated successfully. Mar 13 00:41:32.244594 systemd-logind[1537]: Session 18 logged out. Waiting for processes to exit. Mar 13 00:41:32.250824 systemd[1]: Started sshd@18-10.0.0.55:22-10.0.0.1:54008.service - OpenSSH per-connection server daemon (10.0.0.1:54008). Mar 13 00:41:32.258206 systemd-logind[1537]: Removed session 18. Mar 13 00:41:32.315196 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 54008 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:32.317165 sshd-session[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:32.324375 systemd-logind[1537]: New session 19 of user core. Mar 13 00:41:32.337738 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 13 00:41:32.682508 sshd[5697]: Connection closed by 10.0.0.1 port 54008 Mar 13 00:41:32.683039 sshd-session[5694]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:32.695932 systemd[1]: sshd@18-10.0.0.55:22-10.0.0.1:54008.service: Deactivated successfully. Mar 13 00:41:32.699233 systemd[1]: session-19.scope: Deactivated successfully. Mar 13 00:41:32.701258 systemd-logind[1537]: Session 19 logged out. Waiting for processes to exit. Mar 13 00:41:32.707037 systemd[1]: Started sshd@19-10.0.0.55:22-10.0.0.1:54016.service - OpenSSH per-connection server daemon (10.0.0.1:54016). Mar 13 00:41:32.708329 systemd-logind[1537]: Removed session 19. Mar 13 00:41:32.818035 sshd[5708]: Accepted publickey for core from 10.0.0.1 port 54016 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:32.821014 sshd-session[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:32.829507 systemd-logind[1537]: New session 20 of user core. Mar 13 00:41:32.838727 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 13 00:41:33.727606 sshd[5711]: Connection closed by 10.0.0.1 port 54016 Mar 13 00:41:33.727231 sshd-session[5708]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:33.741573 systemd[1]: sshd@19-10.0.0.55:22-10.0.0.1:54016.service: Deactivated successfully. Mar 13 00:41:33.749755 systemd[1]: session-20.scope: Deactivated successfully. Mar 13 00:41:33.768659 systemd-logind[1537]: Session 20 logged out. Waiting for processes to exit. Mar 13 00:41:33.774255 systemd-logind[1537]: Removed session 20. Mar 13 00:41:33.779881 systemd[1]: Started sshd@20-10.0.0.55:22-10.0.0.1:54024.service - OpenSSH per-connection server daemon (10.0.0.1:54024). Mar 13 00:41:33.891717 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 54024 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:33.893723 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:33.901198 systemd-logind[1537]: New session 21 of user core. Mar 13 00:41:33.913769 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 13 00:41:34.420094 sshd[5742]: Connection closed by 10.0.0.1 port 54024 Mar 13 00:41:34.420812 sshd-session[5738]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:34.434804 systemd[1]: sshd@20-10.0.0.55:22-10.0.0.1:54024.service: Deactivated successfully. Mar 13 00:41:34.438892 systemd[1]: session-21.scope: Deactivated successfully. Mar 13 00:41:34.444916 systemd-logind[1537]: Session 21 logged out. Waiting for processes to exit. Mar 13 00:41:34.467359 systemd[1]: Started sshd@21-10.0.0.55:22-10.0.0.1:54030.service - OpenSSH per-connection server daemon (10.0.0.1:54030). Mar 13 00:41:34.476255 systemd-logind[1537]: Removed session 21. Mar 13 00:41:34.571746 sshd[5755]: Accepted publickey for core from 10.0.0.1 port 54030 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:34.573580 sshd-session[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:34.580810 systemd-logind[1537]: New session 22 of user core. Mar 13 00:41:34.586723 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 13 00:41:34.692060 sshd[5758]: Connection closed by 10.0.0.1 port 54030 Mar 13 00:41:34.692583 sshd-session[5755]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:34.696816 systemd[1]: sshd@21-10.0.0.55:22-10.0.0.1:54030.service: Deactivated successfully. Mar 13 00:41:34.699251 systemd[1]: session-22.scope: Deactivated successfully. Mar 13 00:41:34.702345 systemd-logind[1537]: Session 22 logged out. Waiting for processes to exit. Mar 13 00:41:34.704041 systemd-logind[1537]: Removed session 22. Mar 13 00:41:39.714042 systemd[1]: Started sshd@22-10.0.0.55:22-10.0.0.1:57112.service - OpenSSH per-connection server daemon (10.0.0.1:57112). Mar 13 00:41:39.795359 sshd[5820]: Accepted publickey for core from 10.0.0.1 port 57112 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:39.798600 sshd-session[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:39.809584 systemd-logind[1537]: New session 23 of user core. Mar 13 00:41:39.811744 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 13 00:41:39.928143 sshd[5823]: Connection closed by 10.0.0.1 port 57112 Mar 13 00:41:39.929672 sshd-session[5820]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:39.936024 systemd[1]: sshd@22-10.0.0.55:22-10.0.0.1:57112.service: Deactivated successfully. Mar 13 00:41:39.939005 systemd[1]: session-23.scope: Deactivated successfully. Mar 13 00:41:39.941370 systemd-logind[1537]: Session 23 logged out. Waiting for processes to exit. Mar 13 00:41:39.944162 systemd-logind[1537]: Removed session 23. Mar 13 00:41:44.944965 systemd[1]: Started sshd@23-10.0.0.55:22-10.0.0.1:57118.service - OpenSSH per-connection server daemon (10.0.0.1:57118). Mar 13 00:41:45.024751 sshd[5861]: Accepted publickey for core from 10.0.0.1 port 57118 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:45.026578 sshd-session[5861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:45.033219 systemd-logind[1537]: New session 24 of user core. Mar 13 00:41:45.044754 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 13 00:41:45.150875 sshd[5864]: Connection closed by 10.0.0.1 port 57118 Mar 13 00:41:45.151629 sshd-session[5861]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:45.157911 systemd[1]: sshd@23-10.0.0.55:22-10.0.0.1:57118.service: Deactivated successfully. Mar 13 00:41:45.161275 systemd[1]: session-24.scope: Deactivated successfully. Mar 13 00:41:45.163244 systemd-logind[1537]: Session 24 logged out. Waiting for processes to exit. Mar 13 00:41:45.165884 systemd-logind[1537]: Removed session 24. Mar 13 00:41:47.263070 kubelet[2706]: E0313 00:41:47.262358 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 13 00:41:50.178808 systemd[1]: Started sshd@24-10.0.0.55:22-10.0.0.1:59494.service - OpenSSH per-connection server daemon (10.0.0.1:59494). Mar 13 00:41:50.288047 sshd[5914]: Accepted publickey for core from 10.0.0.1 port 59494 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:50.290239 sshd-session[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:50.298244 systemd-logind[1537]: New session 25 of user core. Mar 13 00:41:50.310869 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 13 00:41:50.468644 sshd[5917]: Connection closed by 10.0.0.1 port 59494 Mar 13 00:41:50.469272 sshd-session[5914]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:50.475093 systemd[1]: sshd@24-10.0.0.55:22-10.0.0.1:59494.service: Deactivated successfully. Mar 13 00:41:50.478643 systemd[1]: session-25.scope: Deactivated successfully. Mar 13 00:41:50.481990 systemd-logind[1537]: Session 25 logged out. Waiting for processes to exit. Mar 13 00:41:50.484530 systemd-logind[1537]: Removed session 25. Mar 13 00:41:55.499031 systemd[1]: Started sshd@25-10.0.0.55:22-10.0.0.1:59500.service - OpenSSH per-connection server daemon (10.0.0.1:59500). Mar 13 00:41:55.594760 sshd[5930]: Accepted publickey for core from 10.0.0.1 port 59500 ssh2: RSA SHA256:eWtiGYQHnkjgJlXJLLPoGwe2+/3lLXbpacmtzaUtKgo Mar 13 00:41:55.596790 sshd-session[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:41:55.603885 systemd-logind[1537]: New session 26 of user core. Mar 13 00:41:55.612669 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 13 00:41:55.729045 sshd[5933]: Connection closed by 10.0.0.1 port 59500 Mar 13 00:41:55.729569 sshd-session[5930]: pam_unix(sshd:session): session closed for user core Mar 13 00:41:55.734662 systemd[1]: sshd@25-10.0.0.55:22-10.0.0.1:59500.service: Deactivated successfully. Mar 13 00:41:55.737672 systemd[1]: session-26.scope: Deactivated successfully. Mar 13 00:41:55.740184 systemd-logind[1537]: Session 26 logged out. Waiting for processes to exit. Mar 13 00:41:55.743600 systemd-logind[1537]: Removed session 26.