Mar 3 13:49:42.621927 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 10:59:45 -00 2026
Mar 3 13:49:42.621967 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:49:42.621984 kernel: BIOS-provided physical RAM map:
Mar 3 13:49:42.621994 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 3 13:49:42.622003 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 3 13:49:42.622012 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 3 13:49:42.622024 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 3 13:49:42.622035 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 3 13:49:42.622092 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 3 13:49:42.622102 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 3 13:49:42.622110 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 3 13:49:42.622124 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 3 13:49:42.622132 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 3 13:49:42.622140 kernel: NX (Execute Disable) protection: active
Mar 3 13:49:42.622151 kernel: APIC: Static calls initialized
Mar 3 13:49:42.622163 kernel: SMBIOS 2.8 present.
Mar 3 13:49:42.622273 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 3 13:49:42.622284 kernel: DMI: Memory slots populated: 1/1
Mar 3 13:49:42.622294 kernel: Hypervisor detected: KVM
Mar 3 13:49:42.622305 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 3 13:49:42.622316 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 3 13:49:42.622325 kernel: kvm-clock: using sched offset of 33660349161 cycles
Mar 3 13:49:42.622334 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 3 13:49:42.622344 kernel: tsc: Detected 2445.426 MHz processor
Mar 3 13:49:42.622353 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 3 13:49:42.622363 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 3 13:49:42.622481 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 3 13:49:42.622491 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 3 13:49:42.622500 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 3 13:49:42.622509 kernel: Using GB pages for direct mapping
Mar 3 13:49:42.622518 kernel: ACPI: Early table checksum verification disabled
Mar 3 13:49:42.622527 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 3 13:49:42.622537 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:49:42.622547 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:49:42.622560 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:49:42.622575 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 3 13:49:42.622584 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:49:42.622594 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:49:42.622603 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:49:42.622612 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 3 13:49:42.622627 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 3 13:49:42.622641 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 3 13:49:42.622650 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 3 13:49:42.622660 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 3 13:49:42.622671 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 3 13:49:42.622682 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 3 13:49:42.622692 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 3 13:49:42.622703 kernel: No NUMA configuration found
Mar 3 13:49:42.622714 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 3 13:49:42.622729 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 3 13:49:42.622740 kernel: Zone ranges:
Mar 3 13:49:42.622751 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 3 13:49:42.622762 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 3 13:49:42.622773 kernel: Normal empty
Mar 3 13:49:42.622784 kernel: Device empty
Mar 3 13:49:42.622794 kernel: Movable zone start for each node
Mar 3 13:49:42.622805 kernel: Early memory node ranges
Mar 3 13:49:42.622816 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 3 13:49:42.622826 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 3 13:49:42.622841 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 3 13:49:42.622905 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 3 13:49:42.622918 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 3 13:49:42.622965 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 3 13:49:42.622977 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 3 13:49:42.622988 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 3 13:49:42.622999 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 3 13:49:42.623009 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 3 13:49:42.623065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 3 13:49:42.623081 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 3 13:49:42.623091 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 3 13:49:42.623100 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 3 13:49:42.623110 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 3 13:49:42.623119 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 3 13:49:42.623128 kernel: TSC deadline timer available
Mar 3 13:49:42.623138 kernel: CPU topo: Max. logical packages: 1
Mar 3 13:49:42.623150 kernel: CPU topo: Max. logical dies: 1
Mar 3 13:49:42.623162 kernel: CPU topo: Max. dies per package: 1
Mar 3 13:49:42.623176 kernel: CPU topo: Max. threads per core: 1
Mar 3 13:49:42.623186 kernel: CPU topo: Num. cores per package: 4
Mar 3 13:49:42.623195 kernel: CPU topo: Num. threads per package: 4
Mar 3 13:49:42.623204 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 3 13:49:42.623213 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 3 13:49:42.623288 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 3 13:49:42.623298 kernel: kvm-guest: setup PV sched yield
Mar 3 13:49:42.623308 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 3 13:49:42.623317 kernel: Booting paravirtualized kernel on KVM
Mar 3 13:49:42.623327 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 3 13:49:42.623342 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 3 13:49:42.623351 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 3 13:49:42.623361 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 3 13:49:42.623370 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 3 13:49:42.623475 kernel: kvm-guest: PV spinlocks enabled
Mar 3 13:49:42.623487 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 3 13:49:42.623499 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c
Mar 3 13:49:42.623510 kernel: random: crng init done
Mar 3 13:49:42.623526 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 3 13:49:42.623537 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 3 13:49:42.623548 kernel: Fallback order for Node 0: 0
Mar 3 13:49:42.623559 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 3 13:49:42.623569 kernel: Policy zone: DMA32
Mar 3 13:49:42.623580 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 3 13:49:42.623591 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 3 13:49:42.623602 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 3 13:49:42.623613 kernel: ftrace: allocated 157 pages with 5 groups
Mar 3 13:49:42.623628 kernel: Dynamic Preempt: voluntary
Mar 3 13:49:42.623639 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 3 13:49:42.623651 kernel: rcu: RCU event tracing is enabled.
Mar 3 13:49:42.623663 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 3 13:49:42.623674 kernel: Trampoline variant of Tasks RCU enabled.
Mar 3 13:49:42.623726 kernel: Rude variant of Tasks RCU enabled.
Mar 3 13:49:42.623738 kernel: Tracing variant of Tasks RCU enabled.
Mar 3 13:49:42.623749 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 3 13:49:42.623760 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 3 13:49:42.623771 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:49:42.623787 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:49:42.623798 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 3 13:49:42.623809 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 3 13:49:42.623820 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 3 13:49:42.623842 kernel: Console: colour VGA+ 80x25
Mar 3 13:49:42.623857 kernel: printk: legacy console [ttyS0] enabled
Mar 3 13:49:42.623868 kernel: ACPI: Core revision 20240827
Mar 3 13:49:42.623880 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 3 13:49:42.623891 kernel: APIC: Switch to symmetric I/O mode setup
Mar 3 13:49:42.623902 kernel: x2apic enabled
Mar 3 13:49:42.623914 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 3 13:49:42.623970 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 3 13:49:42.623983 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 3 13:49:42.623994 kernel: kvm-guest: setup PV IPIs
Mar 3 13:49:42.624005 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 3 13:49:42.624017 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 3 13:49:42.624036 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 3 13:49:42.624046 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 3 13:49:42.624056 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 3 13:49:42.624066 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 3 13:49:42.624077 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 3 13:49:42.624087 kernel: Spectre V2 : Mitigation: Retpolines
Mar 3 13:49:42.624097 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 3 13:49:42.624106 kernel: Speculative Store Bypass: Vulnerable
Mar 3 13:49:42.624116 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 3 13:49:42.624132 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 3 13:49:42.624141 kernel: active return thunk: srso_alias_return_thunk
Mar 3 13:49:42.624154 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 3 13:49:42.624167 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 3 13:49:42.624177 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 3 13:49:42.624187 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 3 13:49:42.624197 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 3 13:49:42.624207 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 3 13:49:42.624285 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 3 13:49:42.624299 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 3 13:49:42.624309 kernel: Freeing SMP alternatives memory: 32K
Mar 3 13:49:42.624318 kernel: pid_max: default: 32768 minimum: 301
Mar 3 13:49:42.624328 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 3 13:49:42.624338 kernel: landlock: Up and running.
Mar 3 13:49:42.624348 kernel: SELinux: Initializing.
Mar 3 13:49:42.624357 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 13:49:42.624368 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 3 13:49:42.624513 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 3 13:49:42.624526 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 3 13:49:42.624537 kernel: signal: max sigframe size: 1776
Mar 3 13:49:42.624549 kernel: rcu: Hierarchical SRCU implementation.
Mar 3 13:49:42.624561 kernel: rcu: Max phase no-delay instances is 400.
Mar 3 13:49:42.624572 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 3 13:49:42.624584 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 3 13:49:42.624596 kernel: smp: Bringing up secondary CPUs ...
Mar 3 13:49:42.624607 kernel: smpboot: x86: Booting SMP configuration:
Mar 3 13:49:42.624623 kernel: .... node #0, CPUs: #1 #2 #3
Mar 3 13:49:42.624635 kernel: smp: Brought up 1 node, 4 CPUs
Mar 3 13:49:42.624646 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 3 13:49:42.624659 kernel: Memory: 2420716K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 145096K reserved, 0K cma-reserved)
Mar 3 13:49:42.624670 kernel: devtmpfs: initialized
Mar 3 13:49:42.624681 kernel: x86/mm: Memory block size: 128MB
Mar 3 13:49:42.624693 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 3 13:49:42.624705 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 3 13:49:42.624716 kernel: pinctrl core: initialized pinctrl subsystem
Mar 3 13:49:42.624732 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 3 13:49:42.624744 kernel: audit: initializing netlink subsys (disabled)
Mar 3 13:49:42.624755 kernel: audit: type=2000 audit(1772545774.486:1): state=initialized audit_enabled=0 res=1
Mar 3 13:49:42.624767 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 3 13:49:42.624778 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 3 13:49:42.624789 kernel: cpuidle: using governor menu
Mar 3 13:49:42.624801 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 3 13:49:42.624855 kernel: dca service started, version 1.12.1
Mar 3 13:49:42.624869 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 3 13:49:42.624886 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 3 13:49:42.624898 kernel: PCI: Using configuration type 1 for base access
Mar 3 13:49:42.624909 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 3 13:49:42.624920 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 3 13:49:42.624932 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 3 13:49:42.624944 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 3 13:49:42.624955 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 3 13:49:42.624966 kernel: ACPI: Added _OSI(Module Device)
Mar 3 13:49:42.625064 kernel: ACPI: Added _OSI(Processor Device)
Mar 3 13:49:42.625597 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 3 13:49:42.625615 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 3 13:49:42.625627 kernel: ACPI: Interpreter enabled
Mar 3 13:49:42.625639 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 3 13:49:42.625650 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 3 13:49:42.625662 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 3 13:49:42.625673 kernel: PCI: Using E820 reservations for host bridge windows
Mar 3 13:49:42.625739 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 3 13:49:42.625752 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 3 13:49:42.626340 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 3 13:49:42.626717 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 3 13:49:42.626911 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 3 13:49:42.626927 kernel: PCI host bridge to bus 0000:00
Mar 3 13:49:42.627301 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 3 13:49:42.627612 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 3 13:49:42.627852 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 3 13:49:42.628025 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 3 13:49:42.628206 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 3 13:49:42.628555 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 3 13:49:42.628731 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 3 13:49:42.629175 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 3 13:49:42.629758 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 3 13:49:42.629920 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 3 13:49:42.630107 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 3 13:49:42.630361 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 3 13:49:42.630663 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 3 13:49:42.631021 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 3 13:49:42.631281 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 3 13:49:42.631593 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 3 13:49:42.631832 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 3 13:49:42.632135 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 3 13:49:42.632607 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 3 13:49:42.632908 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 3 13:49:42.633100 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 3 13:49:42.633583 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 3 13:49:42.633790 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 3 13:49:42.634043 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 3 13:49:42.634308 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 3 13:49:42.634609 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 3 13:49:42.634903 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 3 13:49:42.635096 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 3 13:49:42.635597 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 3 13:49:42.635799 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 3 13:49:42.635984 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 3 13:49:42.636485 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 3 13:49:42.636681 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 3 13:49:42.636697 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 3 13:49:42.636709 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 3 13:49:42.636721 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 3 13:49:42.636739 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 3 13:49:42.636750 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 3 13:49:42.636761 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 3 13:49:42.636773 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 3 13:49:42.636784 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 3 13:49:42.636796 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 3 13:49:42.636807 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 3 13:49:42.636818 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 3 13:49:42.636830 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 3 13:49:42.636846 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 3 13:49:42.636858 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 3 13:49:42.636869 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 3 13:49:42.636880 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 3 13:49:42.636892 kernel: iommu: Default domain type: Translated
Mar 3 13:49:42.636903 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 3 13:49:42.636914 kernel: PCI: Using ACPI for IRQ routing
Mar 3 13:49:42.636925 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 3 13:49:42.636937 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 3 13:49:42.636952 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 3 13:49:42.637281 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 3 13:49:42.637694 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 3 13:49:42.637882 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 3 13:49:42.637898 kernel: vgaarb: loaded
Mar 3 13:49:42.637910 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 3 13:49:42.637922 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 3 13:49:42.637933 kernel: clocksource: Switched to clocksource kvm-clock
Mar 3 13:49:42.637944 kernel: VFS: Disk quotas dquot_6.6.0
Mar 3 13:49:42.637962 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 3 13:49:42.637973 kernel: pnp: PnP ACPI init
Mar 3 13:49:42.638570 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 3 13:49:42.638591 kernel: pnp: PnP ACPI: found 6 devices
Mar 3 13:49:42.638604 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 3 13:49:42.638615 kernel: NET: Registered PF_INET protocol family
Mar 3 13:49:42.638627 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 3 13:49:42.638639 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 3 13:49:42.638656 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 3 13:49:42.638668 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 3 13:49:42.638679 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 3 13:49:42.638691 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 3 13:49:42.638702 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 13:49:42.638714 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 3 13:49:42.638725 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 3 13:49:42.638737 kernel: NET: Registered PF_XDP protocol family
Mar 3 13:49:42.638929 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 3 13:49:42.639108 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 3 13:49:42.639349 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 3 13:49:42.639670 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 3 13:49:42.639844 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 3 13:49:42.640013 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 3 13:49:42.640030 kernel: PCI: CLS 0 bytes, default 64
Mar 3 13:49:42.640044 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 3 13:49:42.640054 kernel: Initialise system trusted keyrings
Mar 3 13:49:42.640072 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 3 13:49:42.640082 kernel: Key type asymmetric registered
Mar 3 13:49:42.640092 kernel: Asymmetric key parser 'x509' registered
Mar 3 13:49:42.640102 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 3 13:49:42.640112 kernel: io scheduler mq-deadline registered
Mar 3 13:49:42.640122 kernel: io scheduler kyber registered
Mar 3 13:49:42.640132 kernel: io scheduler bfq registered
Mar 3 13:49:42.640142 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 3 13:49:42.640155 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 3 13:49:42.640173 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 3 13:49:42.640184 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 3 13:49:42.640194 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 3 13:49:42.640204 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 3 13:49:42.640213 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 3 13:49:42.640290 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 3 13:49:42.640303 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 3 13:49:42.640795 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 3 13:49:42.640988 kernel: rtc_cmos 00:04: registered as rtc0
Mar 3 13:49:42.641171 kernel: rtc_cmos 00:04: setting system clock to 2026-03-03T13:49:41 UTC (1772545781)
Mar 3 13:49:42.641188 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 3 13:49:42.641527 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 3 13:49:42.641545 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 3 13:49:42.641557 kernel: NET: Registered PF_INET6 protocol family
Mar 3 13:49:42.641568 kernel: Segment Routing with IPv6
Mar 3 13:49:42.641579 kernel: In-situ OAM (IOAM) with IPv6
Mar 3 13:49:42.641591 kernel: NET: Registered PF_PACKET protocol family
Mar 3 13:49:42.641608 kernel: Key type dns_resolver registered
Mar 3 13:49:42.641620 kernel: IPI shorthand broadcast: enabled
Mar 3 13:49:42.641631 kernel: sched_clock: Marking stable (5676043545, 1260761569)->(7496617254, -559812140)
Mar 3 13:49:42.641642 kernel: registered taskstats version 1
Mar 3 13:49:42.641654 kernel: Loading compiled-in X.509 certificates
Mar 3 13:49:42.641666 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: bf135b2a3d3664cc6742f4e1848867384c1e52f1'
Mar 3 13:49:42.641677 kernel: Demotion targets for Node 0: null
Mar 3 13:49:42.641688 kernel: Key type .fscrypt registered
Mar 3 13:49:42.641699 kernel: Key type fscrypt-provisioning registered
Mar 3 13:49:42.641715 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 3 13:49:42.641727 kernel: ima: Allocated hash algorithm: sha1
Mar 3 13:49:42.641738 kernel: ima: No architecture policies found
Mar 3 13:49:42.641749 kernel: clk: Disabling unused clocks
Mar 3 13:49:42.641761 kernel: Warning: unable to open an initial console.
Mar 3 13:49:42.641773 kernel: Freeing unused kernel image (initmem) memory: 46200K
Mar 3 13:49:42.641785 kernel: Write protecting the kernel read-only data: 40960k
Mar 3 13:49:42.641796 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 3 13:49:42.641811 kernel: Run /init as init process
Mar 3 13:49:42.641823 kernel: with arguments:
Mar 3 13:49:42.641834 kernel: /init
Mar 3 13:49:42.641846 kernel: with environment:
Mar 3 13:49:42.641857 kernel: HOME=/
Mar 3 13:49:42.641868 kernel: TERM=linux
Mar 3 13:49:42.641881 systemd[1]: Successfully made /usr/ read-only.
Mar 3 13:49:42.641896 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 13:49:42.641913 systemd[1]: Detected virtualization kvm.
Mar 3 13:49:42.641924 systemd[1]: Detected architecture x86-64.
Mar 3 13:49:42.641936 systemd[1]: Running in initrd.
Mar 3 13:49:42.641948 systemd[1]: No hostname configured, using default hostname.
Mar 3 13:49:42.641960 systemd[1]: Hostname set to .
Mar 3 13:49:42.641971 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 13:49:42.641983 systemd[1]: Queued start job for default target initrd.target.
Mar 3 13:49:42.641995 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:49:42.642025 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:49:42.642045 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 3 13:49:42.642056 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 13:49:42.642067 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 3 13:49:42.642079 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 3 13:49:42.642096 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 3 13:49:42.642107 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 3 13:49:42.642118 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:49:42.642129 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:49:42.642140 systemd[1]: Reached target paths.target - Path Units.
Mar 3 13:49:42.642152 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 13:49:42.642167 systemd[1]: Reached target swap.target - Swaps.
Mar 3 13:49:42.642179 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 13:49:42.642195 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 13:49:42.642206 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 13:49:42.642276 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 3 13:49:42.642296 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 3 13:49:42.642310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 3 13:49:42.642321 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 3 13:49:42.642332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 3 13:49:42.642343 systemd[1]: Reached target sockets.target - Socket Units. Mar 3 13:49:42.642354 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 3 13:49:42.642369 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 3 13:49:42.642478 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 3 13:49:42.642489 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 3 13:49:42.642500 systemd[1]: Starting systemd-fsck-usr.service... Mar 3 13:49:42.642511 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 3 13:49:42.642523 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 3 13:49:42.642534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:49:42.642586 systemd-journald[202]: Collecting audit messages is disabled. Mar 3 13:49:42.642619 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 3 13:49:42.642640 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 3 13:49:42.642653 systemd-journald[202]: Journal started Mar 3 13:49:42.642681 systemd-journald[202]: Runtime Journal (/run/log/journal/16a4679d27ab41b58be1cec0f5f146a2) is 6M, max 48.3M, 42.2M free. Mar 3 13:49:42.617916 systemd-modules-load[204]: Inserted module 'overlay' Mar 3 13:49:42.653670 systemd[1]: Started systemd-journald.service - Journal Service. Mar 3 13:49:42.659155 systemd[1]: Finished systemd-fsck-usr.service. 
Mar 3 13:49:42.668498 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 3 13:49:42.670717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 3 13:49:42.705507 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 3 13:49:42.707295 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 3 13:49:42.990791 kernel: Bridge firewalling registered Mar 3 13:49:42.708637 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 3 13:49:42.715917 systemd-tmpfiles[213]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 3 13:49:42.991716 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 3 13:49:43.004925 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 3 13:49:43.044106 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 3 13:49:43.071596 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 3 13:49:43.087988 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:49:43.098445 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 3 13:49:43.126351 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 3 13:49:43.131962 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 3 13:49:43.145101 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 3 13:49:43.156659 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 3 13:49:43.165024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 3 13:49:43.214753 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=51ade538e3d3c371f07ae1ec6fa9803fff0566ec060cf4b56dc685fc36d0e01c Mar 3 13:49:43.239616 systemd-resolved[245]: Positive Trust Anchors: Mar 3 13:49:43.239685 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 3 13:49:43.239728 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 3 13:49:43.244877 systemd-resolved[245]: Defaulting to hostname 'linux'. Mar 3 13:49:43.247370 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 3 13:49:43.249075 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 3 13:49:43.435586 kernel: SCSI subsystem initialized Mar 3 13:49:43.447687 kernel: Loading iSCSI transport class v2.0-870. Mar 3 13:49:43.462565 kernel: iscsi: registered transport (tcp) Mar 3 13:49:43.491473 kernel: iscsi: registered transport (qla4xxx) Mar 3 13:49:43.491510 kernel: QLogic iSCSI HBA Driver Mar 3 13:49:43.524817 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Mar 3 13:49:43.563536 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 3 13:49:43.577778 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 3 13:49:43.649637 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 3 13:49:43.652191 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 3 13:49:43.736484 kernel: raid6: avx2x4 gen() 29043 MB/s Mar 3 13:49:43.755485 kernel: raid6: avx2x2 gen() 27259 MB/s Mar 3 13:49:43.777346 kernel: raid6: avx2x1 gen() 19548 MB/s Mar 3 13:49:43.777452 kernel: raid6: using algorithm avx2x4 gen() 29043 MB/s Mar 3 13:49:43.799464 kernel: raid6: .... xor() 4802 MB/s, rmw enabled Mar 3 13:49:43.799499 kernel: raid6: using avx2x2 recovery algorithm Mar 3 13:49:43.824522 kernel: xor: automatically using best checksumming function avx Mar 3 13:49:44.034513 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 3 13:49:44.046836 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 3 13:49:44.059300 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 3 13:49:44.122315 systemd-udevd[454]: Using default interface naming scheme 'v255'. Mar 3 13:49:44.130335 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 3 13:49:44.139744 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 3 13:49:44.162099 kernel: hrtimer: interrupt took 4219949 ns Mar 3 13:49:44.229799 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Mar 3 13:49:44.375333 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 3 13:49:44.446921 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 3 13:49:44.596534 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 3 13:49:44.613784 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 3 13:49:44.684049 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 3 13:49:44.689714 kernel: cryptd: max_cpu_qlen set to 1000 Mar 3 13:49:44.696478 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 3 13:49:44.725641 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 3 13:49:44.729069 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 3 13:49:44.735778 kernel: libata version 3.00 loaded. Mar 3 13:49:44.759785 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 3 13:49:44.759884 kernel: GPT:9289727 != 19775487 Mar 3 13:49:44.759903 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 3 13:49:44.759914 kernel: GPT:9289727 != 19775487 Mar 3 13:49:44.759924 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 3 13:49:44.759933 kernel: AES CTR mode by8 optimization enabled Mar 3 13:49:44.770330 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:49:44.771095 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:49:44.786758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 3 13:49:44.801964 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Mar 3 13:49:44.819341 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 3 13:49:44.828787 kernel: ahci 0000:00:1f.2: version 3.0 Mar 3 13:49:44.829013 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 3 13:49:44.844184 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 3 13:49:44.844554 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 3 13:49:44.844723 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 3 13:49:44.876697 kernel: scsi host0: ahci Mar 3 13:49:44.881502 kernel: scsi host1: ahci Mar 3 13:49:44.901871 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 3 13:49:44.913469 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 3 13:49:44.932873 kernel: scsi host2: ahci Mar 3 13:49:44.934628 kernel: scsi host3: ahci Mar 3 13:49:44.924452 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 3 13:49:44.934077 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 3 13:49:44.934993 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 3 13:49:44.941594 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 3 13:49:44.991575 kernel: scsi host4: ahci Mar 3 13:49:44.992573 kernel: scsi host5: ahci Mar 3 13:49:44.992900 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Mar 3 13:49:44.992916 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Mar 3 13:49:44.992954 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Mar 3 13:49:44.992965 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Mar 3 13:49:44.992976 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Mar 3 13:49:44.992986 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Mar 3 13:49:45.065370 disk-uuid[615]: Primary Header is updated. Mar 3 13:49:45.065370 disk-uuid[615]: Secondary Entries is updated. Mar 3 13:49:45.065370 disk-uuid[615]: Secondary Header is updated. Mar 3 13:49:45.275974 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:49:45.276001 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:49:45.262052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 3 13:49:45.310467 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 3 13:49:45.310507 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 3 13:49:45.312589 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 3 13:49:45.321499 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 3 13:49:45.326722 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 3 13:49:45.332885 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 3 13:49:45.332914 kernel: ata3.00: LPM support broken, forcing max_power Mar 3 13:49:45.332930 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 3 13:49:45.335697 kernel: ata3.00: applying bridge limits Mar 3 13:49:45.340927 kernel: ata3.00: LPM support broken, forcing max_power Mar 3 13:49:45.340958 kernel: ata3.00: configured for UDMA/100 Mar 3 13:49:45.351542 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 3 13:49:45.423816 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 3 13:49:45.424164 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 3 13:49:45.441458 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 3 13:49:45.793995 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 3 13:49:45.799033 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 3 13:49:45.802742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 3 13:49:45.812200 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 3 13:49:45.822732 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 3 13:49:45.864978 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 3 13:49:46.083503 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 3 13:49:46.084544 disk-uuid[617]: The operation has completed successfully. Mar 3 13:49:46.129863 systemd[1]: disk-uuid.service: Deactivated successfully. 
Mar 3 13:49:46.130118 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 3 13:49:46.169021 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 3 13:49:46.208978 sh[646]: Success Mar 3 13:49:46.243574 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 3 13:49:46.243675 kernel: device-mapper: uevent: version 1.0.3 Mar 3 13:49:46.249660 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 3 13:49:46.272576 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 3 13:49:46.336625 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 3 13:49:46.346076 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 3 13:49:46.388852 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 3 13:49:46.417610 kernel: BTRFS: device fsid f550cb98-648e-4600-9237-4b15eb09827b devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (658) Mar 3 13:49:46.424479 kernel: BTRFS info (device dm-0): first mount of filesystem f550cb98-648e-4600-9237-4b15eb09827b Mar 3 13:49:46.430519 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:49:46.452080 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 3 13:49:46.452119 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 3 13:49:46.455099 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 3 13:49:46.455972 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 3 13:49:46.475957 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 3 13:49:46.477689 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Mar 3 13:49:46.508528 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 3 13:49:46.558576 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (681) Mar 3 13:49:46.568700 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:49:46.568748 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:49:46.580624 kernel: BTRFS info (device vda6): turning on async discard Mar 3 13:49:46.580661 kernel: BTRFS info (device vda6): enabling free space tree Mar 3 13:49:46.593497 kernel: BTRFS info (device vda6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:49:46.596277 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 3 13:49:46.604085 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 3 13:49:47.029298 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 3 13:49:47.047713 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 3 13:49:47.199360 systemd-networkd[829]: lo: Link UP Mar 3 13:49:47.199490 systemd-networkd[829]: lo: Gained carrier Mar 3 13:49:47.202178 systemd-networkd[829]: Enumeration completed Mar 3 13:49:47.202361 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 3 13:49:47.203466 systemd-networkd[829]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:49:47.203472 systemd-networkd[829]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 3 13:49:47.210166 systemd-networkd[829]: eth0: Link UP Mar 3 13:49:47.210585 systemd-networkd[829]: eth0: Gained carrier Mar 3 13:49:47.210603 systemd-networkd[829]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 3 13:49:47.214589 systemd[1]: Reached target network.target - Network. Mar 3 13:49:47.286582 systemd-networkd[829]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 3 13:49:47.669601 ignition[733]: Ignition 2.22.0 Mar 3 13:49:47.669699 ignition[733]: Stage: fetch-offline Mar 3 13:49:47.669806 ignition[733]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:49:47.669822 ignition[733]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:49:47.670145 ignition[733]: parsed url from cmdline: "" Mar 3 13:49:47.670152 ignition[733]: no config URL provided Mar 3 13:49:47.670161 ignition[733]: reading system config file "/usr/lib/ignition/user.ign" Mar 3 13:49:47.670175 ignition[733]: no config at "/usr/lib/ignition/user.ign" Mar 3 13:49:47.670327 ignition[733]: op(1): [started] loading QEMU firmware config module Mar 3 13:49:47.670336 ignition[733]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 3 13:49:47.686825 ignition[733]: op(1): [finished] loading QEMU firmware config module Mar 3 13:49:48.191741 ignition[733]: parsing config with SHA512: c053b55e1c8647c48918f19b4b27759b7219ccbfa648341ea9558a729c901599480b081c20b737e1a0225d9e44f531c93374775275f3ccb76e081de5aafe51a1 Mar 3 13:49:48.216168 unknown[733]: fetched base config from "system" Mar 3 13:49:48.216249 unknown[733]: fetched user config from "qemu" Mar 3 13:49:48.216746 ignition[733]: fetch-offline: fetch-offline passed Mar 3 13:49:48.218522 ignition[733]: Ignition finished successfully Mar 3 13:49:48.270690 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Mar 3 13:49:48.286492 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 3 13:49:48.294827 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 3 13:49:48.729820 systemd-networkd[829]: eth0: Gained IPv6LL Mar 3 13:49:50.392896 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 2401361277 wd_nsec: 2401361067 Mar 3 13:49:50.530359 ignition[842]: Ignition 2.22.0 Mar 3 13:49:50.530620 ignition[842]: Stage: kargs Mar 3 13:49:50.538309 ignition[842]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:49:50.538475 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:49:50.551488 ignition[842]: kargs: kargs passed Mar 3 13:49:50.551611 ignition[842]: Ignition finished successfully Mar 3 13:49:50.569178 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 3 13:49:50.575560 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 3 13:49:50.681645 ignition[850]: Ignition 2.22.0 Mar 3 13:49:50.681707 ignition[850]: Stage: disks Mar 3 13:49:50.681896 ignition[850]: no configs at "/usr/lib/ignition/base.d" Mar 3 13:49:50.681915 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:49:50.683563 ignition[850]: disks: disks passed Mar 3 13:49:50.683653 ignition[850]: Ignition finished successfully Mar 3 13:49:50.706760 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 3 13:49:50.707484 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 3 13:49:50.716305 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 3 13:49:50.737704 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 3 13:49:50.743726 systemd[1]: Reached target sysinit.target - System Initialization. Mar 3 13:49:50.743843 systemd[1]: Reached target basic.target - Basic System. 
Mar 3 13:49:50.771920 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 3 13:49:50.818698 systemd-fsck[860]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 3 13:49:50.827186 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 3 13:49:50.829123 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 3 13:49:51.044571 kernel: EXT4-fs (vda9): mounted filesystem f0c751de-febc-4e57-b330-c926d38ed5ec r/w with ordered data mode. Quota mode: none. Mar 3 13:49:51.045801 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 3 13:49:51.050097 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 3 13:49:51.063853 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 3 13:49:51.091947 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 3 13:49:51.122932 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (869) Mar 3 13:49:51.122957 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:49:51.122969 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:49:51.098157 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 3 13:49:51.143939 kernel: BTRFS info (device vda6): turning on async discard Mar 3 13:49:51.143970 kernel: BTRFS info (device vda6): enabling free space tree Mar 3 13:49:51.098454 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 3 13:49:51.098500 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 3 13:49:51.125103 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 3 13:49:51.145340 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 3 13:49:51.158292 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 3 13:49:51.374711 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory Mar 3 13:49:51.386859 initrd-setup-root[900]: cut: /sysroot/etc/group: No such file or directory Mar 3 13:49:51.401276 initrd-setup-root[907]: cut: /sysroot/etc/shadow: No such file or directory Mar 3 13:49:51.410105 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory Mar 3 13:49:51.983270 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 3 13:49:51.990958 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 3 13:49:51.998513 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 3 13:49:52.044010 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 3 13:49:52.052075 kernel: BTRFS info (device vda6): last unmount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:49:52.082124 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 3 13:49:52.170982 ignition[983]: INFO : Ignition 2.22.0 Mar 3 13:49:52.170982 ignition[983]: INFO : Stage: mount Mar 3 13:49:52.177641 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 3 13:49:52.177641 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:49:52.177641 ignition[983]: INFO : mount: mount passed Mar 3 13:49:52.177641 ignition[983]: INFO : Ignition finished successfully Mar 3 13:49:52.194914 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 3 13:49:52.205668 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 3 13:49:52.241019 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Mar 3 13:49:52.295625 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (995) Mar 3 13:49:52.308767 kernel: BTRFS info (device vda6): first mount of filesystem af9be1e8-b0f0-42a3-a696-521642a3b9f8 Mar 3 13:49:52.308811 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 3 13:49:52.324749 kernel: BTRFS info (device vda6): turning on async discard Mar 3 13:49:52.324795 kernel: BTRFS info (device vda6): enabling free space tree Mar 3 13:49:52.327900 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 3 13:49:52.393019 ignition[1012]: INFO : Ignition 2.22.0 Mar 3 13:49:52.393019 ignition[1012]: INFO : Stage: files Mar 3 13:49:52.399027 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 3 13:49:52.399027 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 3 13:49:52.407648 ignition[1012]: DEBUG : files: compiled without relabeling support, skipping Mar 3 13:49:52.414969 ignition[1012]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 3 13:49:52.414969 ignition[1012]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 3 13:49:52.431944 ignition[1012]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 3 13:49:52.438135 ignition[1012]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 3 13:49:52.444260 unknown[1012]: wrote ssh authorized keys file for user: core Mar 3 13:49:52.449049 ignition[1012]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 3 13:49:52.458822 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 3 13:49:52.470004 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 3 13:49:52.533782 
ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 3 13:49:52.959694 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 3 13:49:52.959694 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 3 13:49:52.983313 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 3 13:49:52.983313 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 3 13:49:53.004919 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1 Mar 3 13:49:53.385162 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 3 13:49:56.216145 ignition[1012]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw" Mar 3 13:49:56.216145 ignition[1012]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 3 13:49:56.236815 ignition[1012]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 3 13:49:56.248065 ignition[1012]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 3 13:49:56.248065 ignition[1012]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 3 13:49:56.248065 ignition[1012]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 3 13:49:56.248065 ignition[1012]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 3 13:49:56.248065 ignition[1012]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 3 13:49:56.248065 ignition[1012]: INFO 
: files: op(d): [finished] processing unit "coreos-metadata.service" Mar 3 13:49:56.248065 ignition[1012]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 3 13:49:56.339747 ignition[1012]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 3 13:49:56.367732 ignition[1012]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 3 13:49:56.375176 ignition[1012]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 3 13:49:56.375176 ignition[1012]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 3 13:49:56.389796 ignition[1012]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 3 13:49:56.396172 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 3 13:49:56.396172 ignition[1012]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 3 13:49:56.396172 ignition[1012]: INFO : files: files passed Mar 3 13:49:56.396172 ignition[1012]: INFO : Ignition finished successfully Mar 3 13:49:56.395895 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 3 13:49:56.429815 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 3 13:49:56.442720 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 3 13:49:56.469350 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 3 13:49:56.469629 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Mar 3 13:49:56.484847 initrd-setup-root-after-ignition[1041]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 3 13:49:56.492990 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:49:56.492990 initrd-setup-root-after-ignition[1044]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:49:56.486836 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:49:56.524170 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 3 13:49:56.500707 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 3 13:49:56.519882 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 3 13:49:56.591021 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 3 13:49:56.591367 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 3 13:49:56.596144 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 3 13:49:56.614626 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 3 13:49:56.614976 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 3 13:49:56.616722 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 3 13:49:56.752602 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:49:56.785759 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 3 13:49:57.094917 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:49:57.117056 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:49:57.135942 systemd[1]: Stopped target timers.target - Timer Units.
Mar 3 13:49:57.165590 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 3 13:49:57.168695 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 3 13:49:57.191729 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 3 13:49:57.204082 systemd[1]: Stopped target basic.target - Basic System.
Mar 3 13:49:57.232989 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 3 13:49:57.239993 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 3 13:49:57.288708 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 3 13:49:57.312068 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 3 13:49:57.321541 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 3 13:49:57.334949 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 3 13:49:57.346132 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 3 13:49:57.358304 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 3 13:49:57.375535 systemd[1]: Stopped target swap.target - Swaps.
Mar 3 13:49:57.379841 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 3 13:49:57.380133 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 3 13:49:57.392367 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:49:57.401133 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:49:57.405977 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 3 13:49:57.406691 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:49:57.424728 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 3 13:49:57.424869 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 3 13:49:57.439074 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 3 13:49:57.439276 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 3 13:49:57.443861 systemd[1]: Stopped target paths.target - Path Units.
Mar 3 13:49:57.453839 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 3 13:49:57.481551 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:49:57.486624 systemd[1]: Stopped target slices.target - Slice Units.
Mar 3 13:49:57.495720 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 3 13:49:57.502989 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 3 13:49:57.503135 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 3 13:49:57.507103 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 3 13:49:57.507202 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 3 13:49:57.515321 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 3 13:49:57.515625 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 3 13:49:57.523673 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 3 13:49:57.523853 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 3 13:49:57.534293 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 3 13:49:57.543340 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 3 13:49:57.573629 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 3 13:49:57.573827 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:49:57.582012 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 3 13:49:57.582221 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 3 13:49:57.600676 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 3 13:49:57.604927 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 3 13:49:57.638085 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 3 13:49:57.643791 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 3 13:49:57.643945 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 3 13:49:57.674507 ignition[1068]: INFO : Ignition 2.22.0
Mar 3 13:49:57.674507 ignition[1068]: INFO : Stage: umount
Mar 3 13:49:57.681949 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 3 13:49:57.681949 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 3 13:49:57.681949 ignition[1068]: INFO : umount: umount passed
Mar 3 13:49:57.681949 ignition[1068]: INFO : Ignition finished successfully
Mar 3 13:49:57.679041 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 3 13:49:57.679346 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 3 13:49:57.692159 systemd[1]: Stopped target network.target - Network.
Mar 3 13:49:57.704061 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 3 13:49:57.704164 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 3 13:49:57.716607 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 3 13:49:57.716695 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 3 13:49:57.721031 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 3 13:49:57.721108 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 3 13:49:57.737222 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 3 13:49:57.737364 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 3 13:49:57.742485 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 3 13:49:57.742562 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 3 13:49:57.750489 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 3 13:49:57.763193 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 3 13:49:57.773352 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 3 13:49:57.773915 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 3 13:49:57.798923 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 3 13:49:57.799794 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 3 13:49:57.799896 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:49:57.813369 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 3 13:49:57.855757 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 3 13:49:57.856089 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 3 13:49:57.867641 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 3 13:49:57.867945 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 3 13:49:57.881224 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 3 13:49:57.881324 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:49:57.898994 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 3 13:49:57.903184 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 3 13:49:57.903293 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 3 13:49:57.916160 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 3 13:49:57.916222 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:49:57.939778 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 3 13:49:57.939875 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:49:57.949097 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:49:57.958661 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 3 13:49:57.983639 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 3 13:49:57.984665 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:49:57.988936 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 3 13:49:57.988991 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:49:58.004642 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 3 13:49:58.004684 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:49:58.013897 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 3 13:49:58.013969 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 3 13:49:58.028569 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 3 13:49:58.028661 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 3 13:49:58.041015 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 3 13:49:58.041117 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 3 13:49:58.058922 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 3 13:49:58.069314 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 3 13:49:58.069485 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:49:58.083869 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 3 13:49:58.083942 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:49:58.098770 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 3 13:49:58.098851 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 13:49:58.116570 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 3 13:49:58.116633 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:49:58.126327 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 3 13:49:58.126548 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:49:58.144061 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 3 13:49:58.144319 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 3 13:49:58.148678 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 3 13:49:58.148825 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 3 13:49:58.162842 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 3 13:49:58.174837 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 3 13:49:58.205087 systemd[1]: Switching root.
Mar 3 13:49:58.294884 systemd-journald[202]: Journal stopped
Mar 3 13:50:01.119840 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Mar 3 13:50:01.119929 kernel: SELinux: policy capability network_peer_controls=1
Mar 3 13:50:01.119960 kernel: SELinux: policy capability open_perms=1
Mar 3 13:50:01.119977 kernel: SELinux: policy capability extended_socket_class=1
Mar 3 13:50:01.119996 kernel: SELinux: policy capability always_check_network=0
Mar 3 13:50:01.120057 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 3 13:50:01.120115 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 3 13:50:01.120134 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 3 13:50:01.120162 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 3 13:50:01.120180 kernel: SELinux: policy capability userspace_initial_context=0
Mar 3 13:50:01.120196 kernel: audit: type=1403 audit(1772545798.600:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 3 13:50:01.120214 systemd[1]: Successfully loaded SELinux policy in 110.011ms.
Mar 3 13:50:01.120359 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.722ms.
Mar 3 13:50:01.120476 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 3 13:50:01.120497 systemd[1]: Detected virtualization kvm.
Mar 3 13:50:01.120516 systemd[1]: Detected architecture x86-64.
Mar 3 13:50:01.120540 systemd[1]: Detected first boot.
Mar 3 13:50:01.120559 systemd[1]: Initializing machine ID from VM UUID.
Mar 3 13:50:01.120637 zram_generator::config[1114]: No configuration found.
Mar 3 13:50:01.120703 kernel: Guest personality initialized and is inactive
Mar 3 13:50:01.120721 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 3 13:50:01.120738 kernel: Initialized host personality
Mar 3 13:50:01.120756 kernel: NET: Registered PF_VSOCK protocol family
Mar 3 13:50:01.120775 systemd[1]: Populated /etc with preset unit settings.
Mar 3 13:50:01.120799 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 3 13:50:01.120822 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 3 13:50:01.120841 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 3 13:50:01.120860 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:50:01.120879 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 3 13:50:01.120897 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 3 13:50:01.120914 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 3 13:50:01.120932 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 3 13:50:01.120950 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 3 13:50:01.120969 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 3 13:50:01.120991 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 3 13:50:01.121008 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 3 13:50:01.121026 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 3 13:50:01.121044 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 3 13:50:01.121062 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 3 13:50:01.121153 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 3 13:50:01.121176 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 3 13:50:01.121200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 3 13:50:01.121218 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 3 13:50:01.121237 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 3 13:50:01.121316 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 3 13:50:01.121336 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 3 13:50:01.121354 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 3 13:50:01.121467 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 3 13:50:01.121492 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 3 13:50:01.121510 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 3 13:50:01.121541 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 3 13:50:01.121559 systemd[1]: Reached target slices.target - Slice Units.
Mar 3 13:50:01.121577 systemd[1]: Reached target swap.target - Swaps.
Mar 3 13:50:01.121595 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 3 13:50:01.121615 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 3 13:50:01.121630 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 3 13:50:01.121649 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 3 13:50:01.121719 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 3 13:50:01.121739 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 3 13:50:01.121803 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 3 13:50:01.121872 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 3 13:50:01.121892 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 3 13:50:01.121911 systemd[1]: Mounting media.mount - External Media Directory...
Mar 3 13:50:01.121929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:50:01.121946 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 3 13:50:01.121963 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 3 13:50:01.121982 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 3 13:50:01.122002 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 3 13:50:01.122022 systemd[1]: Reached target machines.target - Containers.
Mar 3 13:50:01.122041 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 3 13:50:01.122060 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:50:01.122076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 3 13:50:01.122094 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 3 13:50:01.122112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:50:01.122130 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 13:50:01.122146 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:50:01.122168 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 3 13:50:01.122186 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:50:01.122205 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 3 13:50:01.122223 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 3 13:50:01.122290 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 3 13:50:01.122367 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 3 13:50:01.122476 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 3 13:50:01.122497 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:50:01.122521 kernel: fuse: init (API version 7.41)
Mar 3 13:50:01.122540 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 3 13:50:01.122558 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 3 13:50:01.122575 kernel: ACPI: bus type drm_connector registered
Mar 3 13:50:01.122594 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 3 13:50:01.122612 kernel: loop: module loaded
Mar 3 13:50:01.122628 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 3 13:50:01.122646 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 3 13:50:01.122664 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 3 13:50:01.122765 systemd-journald[1192]: Collecting audit messages is disabled.
Mar 3 13:50:01.122803 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 3 13:50:01.122822 systemd-journald[1192]: Journal started
Mar 3 13:50:01.122858 systemd-journald[1192]: Runtime Journal (/run/log/journal/16a4679d27ab41b58be1cec0f5f146a2) is 6M, max 48.3M, 42.2M free.
Mar 3 13:49:59.794858 systemd[1]: Queued start job for default target multi-user.target.
Mar 3 13:49:59.811550 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 3 13:49:59.812659 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 3 13:49:59.813716 systemd[1]: systemd-journald.service: Consumed 1.235s CPU time.
Mar 3 13:50:01.126162 systemd[1]: Stopped verity-setup.service.
Mar 3 13:50:01.137573 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:50:01.145581 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 3 13:50:01.150535 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 3 13:50:01.157580 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 3 13:50:01.163826 systemd[1]: Mounted media.mount - External Media Directory.
Mar 3 13:50:01.174889 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 3 13:50:01.180689 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 3 13:50:01.185326 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 3 13:50:01.189839 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 3 13:50:01.195019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 3 13:50:01.200171 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 3 13:50:01.200798 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 3 13:50:01.206877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:50:01.207338 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:50:01.212987 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 13:50:01.213698 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 13:50:01.219000 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:50:01.219690 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:50:01.225135 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 3 13:50:01.225702 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 3 13:50:01.231001 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:50:01.231573 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:50:01.236877 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 3 13:50:01.242729 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 3 13:50:01.248879 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 3 13:50:01.255796 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 3 13:50:01.284219 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 3 13:50:01.292763 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 3 13:50:01.313567 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 3 13:50:01.321151 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 3 13:50:01.321190 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 3 13:50:01.329604 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 3 13:50:01.340047 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 3 13:50:01.347527 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:50:01.350961 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 3 13:50:01.363975 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 3 13:50:01.371896 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 13:50:01.374948 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 3 13:50:01.382183 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 13:50:01.390826 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 3 13:50:01.404547 systemd-journald[1192]: Time spent on flushing to /var/log/journal/16a4679d27ab41b58be1cec0f5f146a2 is 38.390ms for 974 entries.
Mar 3 13:50:01.404547 systemd-journald[1192]: System Journal (/var/log/journal/16a4679d27ab41b58be1cec0f5f146a2) is 8M, max 195.6M, 187.6M free.
Mar 3 13:50:01.466488 systemd-journald[1192]: Received client request to flush runtime journal.
Mar 3 13:50:01.468700 kernel: loop0: detected capacity change from 0 to 128560
Mar 3 13:50:01.405616 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 3 13:50:01.427635 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 3 13:50:01.438115 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 3 13:50:01.449331 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 3 13:50:01.456308 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 3 13:50:01.462704 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 3 13:50:01.470665 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 3 13:50:01.480564 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 3 13:50:01.486218 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 3 13:50:01.503304 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 3 13:50:01.503318 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Mar 3 13:50:01.503513 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Mar 3 13:50:01.514504 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 3 13:50:01.520715 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 3 13:50:01.539923 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 3 13:50:01.547820 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 3 13:50:01.550175 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 3 13:50:01.575475 kernel: loop1: detected capacity change from 0 to 110984
Mar 3 13:50:01.625470 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 3 13:50:01.635748 kernel: loop2: detected capacity change from 0 to 219192
Mar 3 13:50:01.640671 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 3 13:50:01.683489 kernel: loop3: detected capacity change from 0 to 128560
Mar 3 13:50:01.684331 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 3 13:50:01.684356 systemd-tmpfiles[1257]: ACLs are not supported, ignoring.
Mar 3 13:50:01.692787 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 3 13:50:01.714598 kernel: loop4: detected capacity change from 0 to 110984
Mar 3 13:50:01.748454 kernel: loop5: detected capacity change from 0 to 219192
Mar 3 13:50:01.775037 (sd-merge)[1260]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 3 13:50:01.775963 (sd-merge)[1260]: Merged extensions into '/usr'.
Mar 3 13:50:01.784608 systemd[1]: Reload requested from client PID 1233 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 3 13:50:01.784763 systemd[1]: Reloading...
Mar 3 13:50:01.871489 zram_generator::config[1285]: No configuration found.
Mar 3 13:50:01.988098 ldconfig[1228]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 3 13:50:02.202916 systemd[1]: Reloading finished in 417 ms.
Mar 3 13:50:02.250052 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 3 13:50:02.254992 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 3 13:50:02.260300 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 3 13:50:02.289211 systemd[1]: Starting ensure-sysext.service...
Mar 3 13:50:02.294202 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 3 13:50:02.315114 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 3 13:50:02.338013 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 3 13:50:02.338121 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 3 13:50:02.338678 systemd[1]: Reload requested from client PID 1326 ('systemctl') (unit ensure-sysext.service)...
Mar 3 13:50:02.338693 systemd[1]: Reloading...
Mar 3 13:50:02.339125 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 3 13:50:02.339841 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 3 13:50:02.341134 systemd-tmpfiles[1327]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 3 13:50:02.341747 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Mar 3 13:50:02.341924 systemd-tmpfiles[1327]: ACLs are not supported, ignoring.
Mar 3 13:50:02.351649 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 13:50:02.351800 systemd-tmpfiles[1327]: Skipping /boot
Mar 3 13:50:02.365455 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Mar 3 13:50:02.376166 systemd-tmpfiles[1327]: Detected autofs mount point /boot during canonicalization of boot.
Mar 3 13:50:02.376186 systemd-tmpfiles[1327]: Skipping /boot
Mar 3 13:50:02.427555 zram_generator::config[1359]: No configuration found.
Mar 3 13:50:02.655679 kernel: mousedev: PS/2 mouse device common for all mice
Mar 3 13:50:02.681620 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Mar 3 13:50:02.694680 kernel: ACPI: button: Power Button [PWRF]
Mar 3 13:50:02.716469 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 3 13:50:02.721787 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 3 13:50:02.731695 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 3 13:50:02.732102 systemd[1]: Reloading finished in 392 ms.
Mar 3 13:50:02.744824 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 3 13:50:02.752929 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 3 13:50:02.830894 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 3 13:50:02.846841 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:50:02.854869 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 13:50:02.863815 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 3 13:50:02.869501 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:50:02.871658 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:50:02.885140 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:50:02.889956 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:50:02.898434 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:50:02.900979 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 3 13:50:02.907908 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:50:02.909844 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 3 13:50:02.913687 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 3 13:50:02.925791 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 3 13:50:02.933901 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 3 13:50:02.939614 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:50:02.943854 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:50:02.944538 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:50:02.945166 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:50:02.946048 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:50:02.951957 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:50:02.952341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:50:02.969227 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:50:02.970165 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 3 13:50:02.976792 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 3 13:50:02.985711 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 3 13:50:02.999820 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 3 13:50:03.009091 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 3 13:50:03.013615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 3 13:50:03.013779 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 3 13:50:03.020982 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 3 13:50:03.025609 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 3 13:50:03.031978 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 3 13:50:03.038779 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 3 13:50:03.045079 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 3 13:50:03.046224 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 3 13:50:03.052242 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 3 13:50:03.052831 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 3 13:50:03.060612 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 3 13:50:03.063044 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 3 13:50:03.071236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 3 13:50:03.071668 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 3 13:50:03.107554 systemd[1]: Finished ensure-sysext.service.
Mar 3 13:50:03.149058 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 3 13:50:03.170580 augenrules[1488]: No rules
Mar 3 13:50:03.172310 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 3 13:50:03.173538 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 3 13:50:03.194913 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 3 13:50:03.210940 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 3 13:50:03.230709 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 3 13:50:03.250347 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 13:50:03.253164 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 13:50:03.273569 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 3 13:50:03.297723 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 3 13:50:03.338697 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 3 13:50:03.661320 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 3 13:50:05.108087 systemd-networkd[1451]: lo: Link UP
Mar 3 13:50:05.108187 systemd-networkd[1451]: lo: Gained carrier
Mar 3 13:50:05.119790 systemd-networkd[1451]: Enumeration completed
Mar 3 13:50:05.120052 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 3 13:50:05.123894 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:50:05.123993 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 3 13:50:05.132875 systemd-networkd[1451]: eth0: Link UP
Mar 3 13:50:05.136627 systemd-networkd[1451]: eth0: Gained carrier
Mar 3 13:50:05.136654 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 3 13:50:05.231628 systemd-resolved[1452]: Positive Trust Anchors:
Mar 3 13:50:05.232117 systemd-resolved[1452]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 3 13:50:05.232223 systemd-resolved[1452]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 3 13:50:05.245842 systemd-resolved[1452]: Defaulting to hostname 'linux'.
Mar 3 13:50:05.255295 systemd-networkd[1451]: eth0: DHCPv4 address 10.0.0.111/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 3 13:50:05.259745 systemd-timesyncd[1495]: Network configuration changed, trying to establish connection.
Mar 3 13:50:05.266594 systemd-timesyncd[1495]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 3 13:50:05.266932 systemd-timesyncd[1495]: Initial clock synchronization to Tue 2026-03-03 13:50:05.414045 UTC.
Mar 3 13:50:05.504207 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 3 13:50:05.512648 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 3 13:50:05.524200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 3 13:50:05.537247 systemd[1]: Reached target network.target - Network.
Mar 3 13:50:05.545980 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 3 13:50:05.555251 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 3 13:50:05.564952 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 3 13:50:05.581197 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 3 13:50:05.597151 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 3 13:50:05.606615 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 3 13:50:05.616503 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 3 13:50:05.616610 systemd[1]: Reached target paths.target - Path Units.
Mar 3 13:50:05.626861 systemd[1]: Reached target time-set.target - System Time Set.
Mar 3 13:50:05.639112 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 3 13:50:05.654610 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 3 13:50:05.675125 systemd[1]: Reached target timers.target - Timer Units.
Mar 3 13:50:05.688631 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 3 13:50:05.720367 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 3 13:50:05.745245 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 3 13:50:05.762078 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 3 13:50:05.786665 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 3 13:50:05.819010 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 3 13:50:05.831470 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 3 13:50:05.854635 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 3 13:50:05.873626 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 3 13:50:05.893001 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 3 13:50:05.914902 systemd[1]: Reached target sockets.target - Socket Units.
Mar 3 13:50:05.932543 systemd[1]: Reached target basic.target - Basic System.
Mar 3 13:50:05.954881 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 3 13:50:05.955053 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 3 13:50:05.961583 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 3 13:50:05.993209 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 3 13:50:06.008749 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 3 13:50:06.043689 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 3 13:50:06.073828 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 3 13:50:06.094300 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 3 13:50:06.105940 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 3 13:50:06.132156 jq[1523]: false
Mar 3 13:50:06.139184 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 3 13:50:06.155721 extend-filesystems[1524]: Found /dev/vda6
Mar 3 13:50:06.159578 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 3 13:50:06.179947 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 3 13:50:06.196536 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 3 13:50:06.210570 extend-filesystems[1524]: Found /dev/vda9
Mar 3 13:50:06.216914 extend-filesystems[1524]: Checking size of /dev/vda9
Mar 3 13:50:06.278584 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 3 13:50:06.291215 systemd-networkd[1451]: eth0: Gained IPv6LL
Mar 3 13:50:06.296525 extend-filesystems[1524]: Resized partition /dev/vda9
Mar 3 13:50:06.297587 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 3 13:50:06.309105 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing passwd entry cache
Mar 3 13:50:06.303325 oslogin_cache_refresh[1525]: Refreshing passwd entry cache
Mar 3 13:50:06.298577 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 3 13:50:06.312537 systemd[1]: Starting update-engine.service - Update Engine...
Mar 3 13:50:06.313698 extend-filesystems[1548]: resize2fs 1.47.3 (8-Jul-2025)
Mar 3 13:50:06.337885 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 3 13:50:06.335882 oslogin_cache_refresh[1525]: Failure getting users, quitting
Mar 3 13:50:06.338044 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting users, quitting
Mar 3 13:50:06.338044 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 3 13:50:06.338044 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Refreshing group entry cache
Mar 3 13:50:06.332906 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 3 13:50:06.335911 oslogin_cache_refresh[1525]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 3 13:50:06.335992 oslogin_cache_refresh[1525]: Refreshing group entry cache
Mar 3 13:50:06.354147 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Failure getting groups, quitting
Mar 3 13:50:06.355530 oslogin_cache_refresh[1525]: Failure getting groups, quitting
Mar 3 13:50:06.357057 google_oslogin_nss_cache[1525]: oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 3 13:50:06.355558 oslogin_cache_refresh[1525]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 3 13:50:06.359896 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 3 13:50:06.371361 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 3 13:50:06.382373 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 3 13:50:06.387503 jq[1549]: true
Mar 3 13:50:06.396307 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 3 13:50:06.397750 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 3 13:50:06.398258 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 3 13:50:06.398971 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 3 13:50:06.410081 systemd[1]: motdgen.service: Deactivated successfully.
Mar 3 13:50:06.410860 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 3 13:50:06.421758 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 3 13:50:06.422230 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 3 13:50:06.452100 update_engine[1546]: I20260303 13:50:06.434137 1546 main.cc:92] Flatcar Update Engine starting
Mar 3 13:50:06.471984 systemd[1]: Reached target network-online.target - Network is Online.
Mar 3 13:50:06.486754 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 3 13:50:06.496766 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:50:06.514008 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 3 13:50:06.569163 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 3 13:50:06.569336 tar[1552]: linux-amd64/LICENSE
Mar 3 13:50:06.571538 extend-filesystems[1548]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 3 13:50:06.571538 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 3 13:50:06.571538 extend-filesystems[1548]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 3 13:50:06.675696 kernel: kvm_amd: TSC scaling supported
Mar 3 13:50:06.675755 kernel: kvm_amd: Nested Virtualization enabled
Mar 3 13:50:06.675782 kernel: kvm_amd: Nested Paging enabled
Mar 3 13:50:06.675805 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 3 13:50:06.675829 kernel: kvm_amd: PMU virtualization is disabled
Mar 3 13:50:06.583137 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 3 13:50:06.676134 jq[1553]: true
Mar 3 13:50:06.676289 update_engine[1546]: I20260303 13:50:06.647998 1546 update_check_scheduler.cc:74] Next update check in 7m40s
Mar 3 13:50:06.676358 extend-filesystems[1524]: Resized filesystem in /dev/vda9
Mar 3 13:50:06.636735 dbus-daemon[1521]: [system] SELinux support is enabled
Mar 3 13:50:06.702214 tar[1552]: linux-amd64/helm
Mar 3 13:50:06.583720 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 3 13:50:06.624323 systemd-logind[1542]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 3 13:50:06.624363 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 3 13:50:06.631189 systemd-logind[1542]: New seat seat0.
Mar 3 13:50:06.653982 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 3 13:50:06.673292 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 3 13:50:06.674336 (ntainerd)[1567]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 3 13:50:06.681577 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 3 13:50:06.719109 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 3 13:50:06.719726 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 3 13:50:06.720220 dbus-daemon[1521]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 3 13:50:06.749966 systemd[1]: Started update-engine.service - Update Engine.
Mar 3 13:50:06.759752 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 3 13:50:06.761139 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 3 13:50:06.761381 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 3 13:50:06.768086 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 3 13:50:06.768192 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 3 13:50:06.780244 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 3 13:50:06.810229 sshd_keygen[1550]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 3 13:50:13.607591 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 3 13:50:13.685773 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 3 13:50:13.752999 bash[1604]: Updated "/home/core/.ssh/authorized_keys"
Mar 3 13:50:13.791338 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 3 13:50:13.816267 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 3 13:50:13.841847 locksmithd[1600]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 3 13:50:13.842116 systemd[1]: issuegen.service: Deactivated successfully.
Mar 3 13:50:13.842964 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 3 13:50:13.887971 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 3 13:50:14.015175 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 3 13:50:14.030119 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 3 13:50:14.042264 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 3 13:50:14.052162 systemd[1]: Reached target getty.target - Login Prompts.
Mar 3 13:50:14.118815 kernel: EDAC MC: Ver: 3.0.0
Mar 3 13:50:14.589569 tar[1552]: linux-amd64/README.md
Mar 3 13:50:14.681252 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 3 13:50:15.152098 containerd[1567]: time="2026-03-03T13:50:15Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 3 13:50:15.175246 containerd[1567]: time="2026-03-03T13:50:15.174958508Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 3 13:50:15.276499 containerd[1567]: time="2026-03-03T13:50:15.275056085Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="86.202µs"
Mar 3 13:50:15.276499 containerd[1567]: time="2026-03-03T13:50:15.275268305Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 3 13:50:15.276499 containerd[1567]: time="2026-03-03T13:50:15.275348693Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 3 13:50:15.276499 containerd[1567]: time="2026-03-03T13:50:15.275850983Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 3 13:50:15.276499 containerd[1567]: time="2026-03-03T13:50:15.275876392Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 3 13:50:15.276499 containerd[1567]: time="2026-03-03T13:50:15.275916359Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:50:15.276499 containerd[1567]: time="2026-03-03T13:50:15.276013251Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 3 13:50:15.276499 containerd[1567]: time="2026-03-03T13:50:15.276030600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:50:15.277284 containerd[1567]: time="2026-03-03T13:50:15.277255610Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 3 13:50:15.277368 containerd[1567]: time="2026-03-03T13:50:15.277350951Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:50:15.277624 containerd[1567]: time="2026-03-03T13:50:15.277554384Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 3 13:50:15.277690 containerd[1567]: time="2026-03-03T13:50:15.277674791Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 3 13:50:15.277865 containerd[1567]: time="2026-03-03T13:50:15.277844443Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 3 13:50:15.278917 containerd[1567]: time="2026-03-03T13:50:15.278857527Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:50:15.279003 containerd[1567]: time="2026-03-03T13:50:15.278987192Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 3 13:50:15.279050 containerd[1567]: time="2026-03-03T13:50:15.279038040Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 3 13:50:15.279517 containerd[1567]: time="2026-03-03T13:50:15.279364330Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 3 13:50:15.280763 containerd[1567]: time="2026-03-03T13:50:15.280735276Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 3 13:50:15.281042 containerd[1567]: time="2026-03-03T13:50:15.281018163Z" level=info msg="metadata content store policy set" policy=shared
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302131042Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302548257Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302580840Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302599741Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302616536Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302629755Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302645239Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302660584Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302678748Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302695936Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302711069Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 3 13:50:15.303102 containerd[1567]: time="2026-03-03T13:50:15.302728580Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303355074Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303478151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303498965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303509876Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303519619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303529120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303539980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303549299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303560121Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303569581Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303578205Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303698581Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303817618Z" level=info msg="Start snapshots syncer"
Mar 3 13:50:15.309093 containerd[1567]: time="2026-03-03T13:50:15.303891317Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Mar 3 13:50:15.311843 containerd[1567]: time="2026-03-03T13:50:15.304914806Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Mar 3 13:50:15.311843 containerd[1567]: time="2026-03-03T13:50:15.305035192Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305569500Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305751445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305772592Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305783715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305793719Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305805296Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305814776Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305824528Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305848377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305858220Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.305869100Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.306335644Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.306364690Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Mar 3 13:50:15.312782 containerd[1567]: time="2026-03-03T13:50:15.306475817Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:50:15.313371 containerd[1567]: time="2026-03-03T13:50:15.306490386Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Mar 3 13:50:15.313371 containerd[1567]: time="2026-03-03T13:50:15.306498436Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Mar 3 13:50:15.313371 containerd[1567]: time="2026-03-03T13:50:15.306507201Z" level=info msg="loading plugin"
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 3 13:50:15.313371 containerd[1567]: time="2026-03-03T13:50:15.306525638Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 3 13:50:15.313371 containerd[1567]: time="2026-03-03T13:50:15.306659243Z" level=info msg="runtime interface created" Mar 3 13:50:15.313371 containerd[1567]: time="2026-03-03T13:50:15.306666436Z" level=info msg="created NRI interface" Mar 3 13:50:15.313371 containerd[1567]: time="2026-03-03T13:50:15.306674719Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 3 13:50:15.313371 containerd[1567]: time="2026-03-03T13:50:15.306686496Z" level=info msg="Connect containerd service" Mar 3 13:50:15.313371 containerd[1567]: time="2026-03-03T13:50:15.306747178Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 3 13:50:15.313371 containerd[1567]: time="2026-03-03T13:50:15.310519925Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 3 13:50:15.616717 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 3 13:50:15.628193 systemd[1]: Started sshd@0-10.0.0.111:22-10.0.0.1:52312.service - OpenSSH per-connection server daemon (10.0.0.1:52312). Mar 3 13:50:16.216145 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 52312 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:50:16.223353 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:50:16.248866 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 3 13:50:16.258964 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 3 13:50:16.297211 systemd-logind[1542]: New session 1 of user core.
Mar 3 13:50:17.447373 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 3 13:50:17.467634 containerd[1567]: time="2026-03-03T13:50:17.467303268Z" level=info msg="Start subscribing containerd event"
Mar 3 13:50:17.468277 containerd[1567]: time="2026-03-03T13:50:17.467723751Z" level=info msg="Start recovering state"
Mar 3 13:50:17.472556 containerd[1567]: time="2026-03-03T13:50:17.469063502Z" level=info msg="Start event monitor"
Mar 3 13:50:17.472556 containerd[1567]: time="2026-03-03T13:50:17.472552918Z" level=info msg="Start cni network conf syncer for default"
Mar 3 13:50:17.472556 containerd[1567]: time="2026-03-03T13:50:17.472684365Z" level=info msg="Start streaming server"
Mar 3 13:50:17.473039 containerd[1567]: time="2026-03-03T13:50:17.472702275Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Mar 3 13:50:17.473039 containerd[1567]: time="2026-03-03T13:50:17.472927569Z" level=info msg="runtime interface starting up..."
Mar 3 13:50:17.473039 containerd[1567]: time="2026-03-03T13:50:17.472947049Z" level=info msg="starting plugins..."
Mar 3 13:50:17.473039 containerd[1567]: time="2026-03-03T13:50:17.472981571Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Mar 3 13:50:17.474720 containerd[1567]: time="2026-03-03T13:50:17.474687616Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 3 13:50:17.475002 containerd[1567]: time="2026-03-03T13:50:17.474973755Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 3 13:50:17.475587 containerd[1567]: time="2026-03-03T13:50:17.475560911Z" level=info msg="containerd successfully booted in 2.326857s"
Mar 3 13:50:17.490842 systemd[1]: Started containerd.service - containerd container runtime.
Mar 3 13:50:17.519608 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 3 13:50:17.726692 (systemd)[1659]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 3 13:50:17.745516 systemd-logind[1542]: New session c1 of user core.
Mar 3 13:50:18.634099 systemd[1659]: Queued start job for default target default.target.
Mar 3 13:50:18.659082 systemd[1659]: Created slice app.slice - User Application Slice.
Mar 3 13:50:18.659227 systemd[1659]: Reached target paths.target - Paths.
Mar 3 13:50:18.659952 systemd[1659]: Reached target timers.target - Timers.
Mar 3 13:50:18.665979 systemd[1659]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 3 13:50:18.739114 systemd[1659]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 3 13:50:18.739492 systemd[1659]: Reached target sockets.target - Sockets.
Mar 3 13:50:18.739563 systemd[1659]: Reached target basic.target - Basic System.
Mar 3 13:50:18.739633 systemd[1659]: Reached target default.target - Main User Target.
Mar 3 13:50:18.739743 systemd[1659]: Startup finished in 949ms.
Mar 3 13:50:18.740115 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 3 13:50:18.850200 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 3 13:50:19.102675 systemd[1]: Started sshd@1-10.0.0.111:22-10.0.0.1:52316.service - OpenSSH per-connection server daemon (10.0.0.1:52316).
Mar 3 13:50:19.917278 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 52316 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:19.919236 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:19.933731 systemd-logind[1542]: New session 2 of user core.
Mar 3 13:50:19.947355 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 3 13:50:20.393518 sshd[1677]: Connection closed by 10.0.0.1 port 52316
Mar 3 13:50:20.401355 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:20.555734 systemd[1]: sshd@1-10.0.0.111:22-10.0.0.1:52316.service: Deactivated successfully.
Mar 3 13:50:20.566731 systemd[1]: session-2.scope: Deactivated successfully.
Mar 3 13:50:20.570739 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit.
Mar 3 13:50:20.581212 systemd[1]: Started sshd@2-10.0.0.111:22-10.0.0.1:33190.service - OpenSSH per-connection server daemon (10.0.0.1:33190).
Mar 3 13:50:20.597581 systemd-logind[1542]: Removed session 2.
Mar 3 13:50:20.827575 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 33190 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:20.832759 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:20.853779 systemd-logind[1542]: New session 3 of user core.
Mar 3 13:50:20.867787 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 3 13:50:21.162115 sshd[1686]: Connection closed by 10.0.0.1 port 33190
Mar 3 13:50:21.163181 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:21.178364 systemd[1]: sshd@2-10.0.0.111:22-10.0.0.1:33190.service: Deactivated successfully.
Mar 3 13:50:21.187325 systemd[1]: session-3.scope: Deactivated successfully.
Mar 3 13:50:21.193905 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit.
Mar 3 13:50:21.201362 systemd-logind[1542]: Removed session 3.
Mar 3 13:50:22.634013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:50:22.635530 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 3 13:50:22.636347 systemd[1]: Startup finished in 5.942s (kernel) + 16.649s (initrd) + 24.137s (userspace) = 46.729s.
Mar 3 13:50:22.655109 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:50:26.422563 kubelet[1696]: E0303 13:50:26.417047 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:50:26.448685 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:50:26.450910 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:50:26.454705 systemd[1]: kubelet.service: Consumed 11.748s CPU time, 259.5M memory peak.
Mar 3 13:50:31.219357 systemd[1]: Started sshd@3-10.0.0.111:22-10.0.0.1:59058.service - OpenSSH per-connection server daemon (10.0.0.1:59058).
Mar 3 13:50:31.317309 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 59058 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:31.319251 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:31.329707 systemd-logind[1542]: New session 4 of user core.
Mar 3 13:50:31.339813 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 3 13:50:31.368143 sshd[1709]: Connection closed by 10.0.0.1 port 59058
Mar 3 13:50:31.368612 sshd-session[1706]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:31.392937 systemd[1]: sshd@3-10.0.0.111:22-10.0.0.1:59058.service: Deactivated successfully.
Mar 3 13:50:31.396874 systemd[1]: session-4.scope: Deactivated successfully.
Mar 3 13:50:31.399142 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit.
Mar 3 13:50:31.403258 systemd[1]: Started sshd@4-10.0.0.111:22-10.0.0.1:59062.service - OpenSSH per-connection server daemon (10.0.0.1:59062).
Mar 3 13:50:31.405651 systemd-logind[1542]: Removed session 4.
Mar 3 13:50:31.500753 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 59062 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:31.503052 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:31.516519 systemd-logind[1542]: New session 5 of user core.
Mar 3 13:50:31.536930 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 3 13:50:31.572272 sshd[1718]: Connection closed by 10.0.0.1 port 59062
Mar 3 13:50:31.572624 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:31.587878 systemd[1]: sshd@4-10.0.0.111:22-10.0.0.1:59062.service: Deactivated successfully.
Mar 3 13:50:31.590996 systemd[1]: session-5.scope: Deactivated successfully.
Mar 3 13:50:31.593195 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit.
Mar 3 13:50:31.597044 systemd[1]: Started sshd@5-10.0.0.111:22-10.0.0.1:59076.service - OpenSSH per-connection server daemon (10.0.0.1:59076).
Mar 3 13:50:31.602344 systemd-logind[1542]: Removed session 5.
Mar 3 13:50:31.689065 sshd[1724]: Accepted publickey for core from 10.0.0.1 port 59076 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:31.691841 sshd-session[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:31.706340 systemd-logind[1542]: New session 6 of user core.
Mar 3 13:50:31.722876 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 3 13:50:31.760021 sshd[1727]: Connection closed by 10.0.0.1 port 59076
Mar 3 13:50:31.757883 sshd-session[1724]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:31.777142 systemd[1]: sshd@5-10.0.0.111:22-10.0.0.1:59076.service: Deactivated successfully.
Mar 3 13:50:31.781013 systemd[1]: session-6.scope: Deactivated successfully.
Mar 3 13:50:31.787030 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit.
Mar 3 13:50:31.795158 systemd[1]: Started sshd@6-10.0.0.111:22-10.0.0.1:59092.service - OpenSSH per-connection server daemon (10.0.0.1:59092).
Mar 3 13:50:31.798724 systemd-logind[1542]: Removed session 6.
Mar 3 13:50:31.866974 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 59092 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:31.870604 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:31.880209 systemd-logind[1542]: New session 7 of user core.
Mar 3 13:50:31.889998 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 3 13:50:31.937279 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 3 13:50:31.938193 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:50:31.972875 sudo[1737]: pam_unix(sudo:session): session closed for user root
Mar 3 13:50:31.977968 sshd[1736]: Connection closed by 10.0.0.1 port 59092
Mar 3 13:50:31.980777 sshd-session[1733]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:32.001015 systemd[1]: sshd@6-10.0.0.111:22-10.0.0.1:59092.service: Deactivated successfully.
Mar 3 13:50:32.004596 systemd[1]: session-7.scope: Deactivated successfully.
Mar 3 13:50:32.007151 systemd-logind[1542]: Session 7 logged out. Waiting for processes to exit.
Mar 3 13:50:32.013042 systemd[1]: Started sshd@7-10.0.0.111:22-10.0.0.1:59098.service - OpenSSH per-connection server daemon (10.0.0.1:59098).
Mar 3 13:50:32.021602 systemd-logind[1542]: Removed session 7.
Mar 3 13:50:32.114801 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 59098 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:32.116821 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:32.138477 systemd-logind[1542]: New session 8 of user core.
Mar 3 13:50:32.146936 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 3 13:50:32.176005 sudo[1748]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 3 13:50:32.176806 sudo[1748]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:50:32.193018 sudo[1748]: pam_unix(sudo:session): session closed for user root
Mar 3 13:50:32.206337 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 3 13:50:32.206934 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:50:32.232367 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 3 13:50:32.363496 augenrules[1770]: No rules
Mar 3 13:50:32.366294 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 3 13:50:32.367795 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 3 13:50:32.369909 sudo[1747]: pam_unix(sudo:session): session closed for user root
Mar 3 13:50:32.372843 sshd[1746]: Connection closed by 10.0.0.1 port 59098
Mar 3 13:50:32.373856 sshd-session[1743]: pam_unix(sshd:session): session closed for user core
Mar 3 13:50:32.389712 systemd[1]: sshd@7-10.0.0.111:22-10.0.0.1:59098.service: Deactivated successfully.
Mar 3 13:50:32.393062 systemd[1]: session-8.scope: Deactivated successfully.
Mar 3 13:50:32.395078 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit.
Mar 3 13:50:32.400308 systemd[1]: Started sshd@8-10.0.0.111:22-10.0.0.1:59104.service - OpenSSH per-connection server daemon (10.0.0.1:59104).
Mar 3 13:50:32.402590 systemd-logind[1542]: Removed session 8.
Mar 3 13:50:32.494634 sshd[1779]: Accepted publickey for core from 10.0.0.1 port 59104 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:50:32.497057 sshd-session[1779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:50:32.508591 systemd-logind[1542]: New session 9 of user core.
Mar 3 13:50:32.531182 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 3 13:50:32.563885 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 3 13:50:32.565029 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 3 13:50:33.252140 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 3 13:50:33.283200 (dockerd)[1804]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 3 13:50:36.632342 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 3 13:50:36.940719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:50:38.263278 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 5169452986 wd_nsec: 5169452657
Mar 3 13:50:37.796460 systemd-resolved[1452]: Clock change detected. Flushing caches.
Mar 3 13:50:37.861704 systemd-journald[1192]: Time jumped backwards, rotating.
Mar 3 13:50:45.209173 dockerd[1804]: time="2026-03-03T13:50:45.195694022Z" level=info msg="Starting up"
Mar 3 13:50:45.275562 dockerd[1804]: time="2026-03-03T13:50:45.272270866Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 3 13:50:46.108502 dockerd[1804]: time="2026-03-03T13:50:46.078992377Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 3 13:50:46.968876 systemd[1]: var-lib-docker-metacopy\x2dcheck3692578616-merged.mount: Deactivated successfully.
Mar 3 13:50:47.387746 dockerd[1804]: time="2026-03-03T13:50:47.387266419Z" level=info msg="Loading containers: start."
Mar 3 13:50:47.606476 kernel: Initializing XFRM netlink socket
Mar 3 13:50:51.434398 update_engine[1546]: I20260303 13:50:51.428395 1546 update_attempter.cc:509] Updating boot flags...
Mar 3 13:50:55.964579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:50:56.027288 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:50:58.094371 kubelet[1934]: E0303 13:50:58.093497 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:50:58.139468 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:50:58.143020 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:50:58.151663 systemd[1]: kubelet.service: Consumed 13.386s CPU time, 111.2M memory peak.
Mar 3 13:50:59.522954 systemd-networkd[1451]: docker0: Link UP
Mar 3 13:50:59.577422 dockerd[1804]: time="2026-03-03T13:50:59.574988090Z" level=info msg="Loading containers: done."
Mar 3 13:50:59.856906 dockerd[1804]: time="2026-03-03T13:50:59.856491323Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 3 13:50:59.858629 dockerd[1804]: time="2026-03-03T13:50:59.858332120Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 3 13:50:59.858629 dockerd[1804]: time="2026-03-03T13:50:59.858571527Z" level=info msg="Initializing buildkit"
Mar 3 13:51:00.328498 dockerd[1804]: time="2026-03-03T13:51:00.327947077Z" level=info msg="Completed buildkit initialization"
Mar 3 13:51:00.443599 dockerd[1804]: time="2026-03-03T13:51:00.441535209Z" level=info msg="Daemon has completed initialization"
Mar 3 13:51:00.451033 dockerd[1804]: time="2026-03-03T13:51:00.448051184Z" level=info msg="API listen on /run/docker.sock"
Mar 3 13:51:00.482213 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 3 13:51:08.368300 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 3 13:51:08.406400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:51:11.214922 systemd-resolved[1452]: Clock change detected. Flushing caches.
Mar 3 13:51:16.956596 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:51:17.035194 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:51:19.367103 containerd[1567]: time="2026-03-03T13:51:19.357643718Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 3 13:51:19.493768 kubelet[2064]: E0303 13:51:19.493127 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:51:19.507174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:51:19.509996 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:51:19.512630 systemd[1]: kubelet.service: Consumed 7.247s CPU time, 110.9M memory peak.
Mar 3 13:51:22.532176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3077618579.mount: Deactivated successfully.
Mar 3 13:51:29.717270 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 3 13:51:29.784986 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:51:34.535050 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:51:34.652685 (kubelet)[2142]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:51:37.543612 kubelet[2142]: E0303 13:51:37.539455 2142 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:51:37.561037 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:51:37.561951 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:51:37.566082 systemd[1]: kubelet.service: Consumed 4.691s CPU time, 109.3M memory peak.
Mar 3 13:51:44.333123 systemd-resolved[1452]: Clock change detected. Flushing caches.
Mar 3 13:51:48.574309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 3 13:51:48.596082 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:51:49.224744 containerd[1567]: time="2026-03-03T13:51:49.224230780Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 3 13:51:49.227064 containerd[1567]: time="2026-03-03T13:51:49.222844889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:51:49.240147 containerd[1567]: time="2026-03-03T13:51:49.240037076Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:51:49.261788 containerd[1567]: time="2026-03-03T13:51:49.261729527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:51:49.263207 containerd[1567]: time="2026-03-03T13:51:49.262867579Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 29.000257786s"
Mar 3 13:51:49.263207 containerd[1567]: time="2026-03-03T13:51:49.263077752Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 3 13:51:49.272873 containerd[1567]: time="2026-03-03T13:51:49.272237916Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 3 13:51:50.818675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:51:50.892023 (kubelet)[2160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:51:51.679839 kubelet[2160]: E0303 13:51:51.677847 2160 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:51:51.687735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:51:51.688206 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:51:51.689169 systemd[1]: kubelet.service: Consumed 1.975s CPU time, 110.4M memory peak.
Mar 3 13:51:59.280163 containerd[1567]: time="2026-03-03T13:51:59.277399592Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:51:59.282867 containerd[1567]: time="2026-03-03T13:51:59.282770409Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 3 13:51:59.288288 containerd[1567]: time="2026-03-03T13:51:59.287650740Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:51:59.298776 containerd[1567]: time="2026-03-03T13:51:59.297182999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:51:59.298776 containerd[1567]: time="2026-03-03T13:51:59.298667548Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 10.026393866s"
Mar 3 13:51:59.298776 containerd[1567]: time="2026-03-03T13:51:59.298708154Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 3 13:51:59.304221 containerd[1567]: time="2026-03-03T13:51:59.303661672Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 3 13:52:01.807039 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 3 13:52:01.833878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:52:02.465181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:52:02.498297 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:52:02.962033 kubelet[2184]: E0303 13:52:02.961059 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:52:02.971706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:52:02.972179 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:52:02.973224 systemd[1]: kubelet.service: Consumed 702ms CPU time, 110.6M memory peak.
Mar 3 13:52:04.910420 containerd[1567]: time="2026-03-03T13:52:04.909303499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:04.915073 containerd[1567]: time="2026-03-03T13:52:04.914874429Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 3 13:52:04.917548 containerd[1567]: time="2026-03-03T13:52:04.917454248Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:04.927870 containerd[1567]: time="2026-03-03T13:52:04.927078897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:04.927870 containerd[1567]: time="2026-03-03T13:52:04.931125752Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 5.626724289s"
Mar 3 13:52:04.927870 containerd[1567]: time="2026-03-03T13:52:04.931170155Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 3 13:52:04.949839 containerd[1567]: time="2026-03-03T13:52:04.943335727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 3 13:52:08.975198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2161180400.mount: Deactivated successfully.
Mar 3 13:52:11.889173 containerd[1567]: time="2026-03-03T13:52:11.888333793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:11.898094 containerd[1567]: time="2026-03-03T13:52:11.897369791Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 3 13:52:11.928609 containerd[1567]: time="2026-03-03T13:52:11.920760006Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:11.968469 containerd[1567]: time="2026-03-03T13:52:11.966712346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:11.969733 containerd[1567]: time="2026-03-03T13:52:11.969035251Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 7.024955376s"
Mar 3 13:52:11.969733 containerd[1567]: time="2026-03-03T13:52:11.969148552Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 3 13:52:11.977780 containerd[1567]: time="2026-03-03T13:52:11.975721835Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 3 13:52:13.046389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Mar 3 13:52:13.054317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:52:13.098293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2887174498.mount: Deactivated successfully.
Mar 3 13:52:13.824421 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:52:13.869566 (kubelet)[2222]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:52:14.394240 kubelet[2222]: E0303 13:52:14.394089 2222 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:52:14.401494 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:52:14.401891 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:52:14.405172 systemd[1]: kubelet.service: Consumed 756ms CPU time, 110.3M memory peak.
Mar 3 13:52:21.300397 containerd[1567]: time="2026-03-03T13:52:21.300099186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:21.305621 containerd[1567]: time="2026-03-03T13:52:21.305203524Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 3 13:52:21.311418 containerd[1567]: time="2026-03-03T13:52:21.311273015Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:21.321307 containerd[1567]: time="2026-03-03T13:52:21.321189729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:21.323813 containerd[1567]: time="2026-03-03T13:52:21.323668708Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 9.347791663s"
Mar 3 13:52:21.323813 containerd[1567]: time="2026-03-03T13:52:21.323748478Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 3 13:52:21.327438 containerd[1567]: time="2026-03-03T13:52:21.327404967Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 3 13:52:22.553340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312847417.mount: Deactivated successfully.
Mar 3 13:52:22.584072 containerd[1567]: time="2026-03-03T13:52:22.583746316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:22.588339 containerd[1567]: time="2026-03-03T13:52:22.588064699Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 3 13:52:22.595678 containerd[1567]: time="2026-03-03T13:52:22.594786674Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:22.603864 containerd[1567]: time="2026-03-03T13:52:22.602126418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:22.603864 containerd[1567]: time="2026-03-03T13:52:22.602759010Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.275316302s"
Mar 3 13:52:22.603864 containerd[1567]: time="2026-03-03T13:52:22.602795688Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 3 13:52:22.609101 containerd[1567]: time="2026-03-03T13:52:22.607759990Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 3 13:52:23.627288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2533522656.mount: Deactivated successfully.
Mar 3 13:52:24.554416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Mar 3 13:52:24.568121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:52:25.218478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:52:25.252761 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 3 13:52:25.545568 kubelet[2298]: E0303 13:52:25.544371 2298 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 3 13:52:25.555084 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 3 13:52:25.555404 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 3 13:52:25.557331 systemd[1]: kubelet.service: Consumed 628ms CPU time, 110.7M memory peak.
Mar 3 13:52:29.432778 containerd[1567]: time="2026-03-03T13:52:29.432308850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:29.436193 containerd[1567]: time="2026-03-03T13:52:29.436027175Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 3 13:52:29.445374 containerd[1567]: time="2026-03-03T13:52:29.443197165Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:29.452541 containerd[1567]: time="2026-03-03T13:52:29.452104194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:52:29.461511 containerd[1567]: time="2026-03-03T13:52:29.455132751Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 6.847331204s"
Mar 3 13:52:29.461511 containerd[1567]: time="2026-03-03T13:52:29.455191341Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 3 13:52:34.511507 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:52:34.512020 systemd[1]: kubelet.service: Consumed 628ms CPU time, 110.7M memory peak.
Mar 3 13:52:34.518481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:52:34.587527 systemd[1]: Reload requested from client PID 2385 ('systemctl') (unit session-9.scope)...
Mar 3 13:52:34.588013 systemd[1]: Reloading...
Mar 3 13:52:34.831805 zram_generator::config[2429]: No configuration found.
Mar 3 13:52:35.384108 systemd[1]: Reloading finished in 793 ms.
Mar 3 13:52:35.495148 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 3 13:52:35.495368 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 3 13:52:35.495851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:52:35.496000 systemd[1]: kubelet.service: Consumed 248ms CPU time, 98.3M memory peak.
Mar 3 13:52:35.498773 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 3 13:52:35.976867 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 3 13:52:36.014054 (kubelet)[2475]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 3 13:52:36.407485 kubelet[2475]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 3 13:52:36.407485 kubelet[2475]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 3 13:52:36.409544 kubelet[2475]: I0303 13:52:36.407783 2475 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 3 13:52:36.778832 kubelet[2475]: I0303 13:52:36.778088 2475 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 3 13:52:36.778832 kubelet[2475]: I0303 13:52:36.778395 2475 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 3 13:52:36.778832 kubelet[2475]: I0303 13:52:36.778744 2475 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 3 13:52:36.778832 kubelet[2475]: I0303 13:52:36.778815 2475 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 3 13:52:36.781209 kubelet[2475]: I0303 13:52:36.779345 2475 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 3 13:52:36.812859 kubelet[2475]: I0303 13:52:36.812503 2475 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 3 13:52:36.819041 kubelet[2475]: E0303 13:52:36.818697 2475 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 3 13:52:36.838881 kubelet[2475]: I0303 13:52:36.838758 2475 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 3 13:52:36.867166 kubelet[2475]: I0303 13:52:36.866992 2475 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 3 13:52:36.870479 kubelet[2475]: I0303 13:52:36.870299 2475 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 3 13:52:36.871498 kubelet[2475]: I0303 13:52:36.870402 2475 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 3 13:52:36.872088 kubelet[2475]: I0303 13:52:36.871844 2475 topology_manager.go:138] "Creating topology manager with none policy"
Mar 3 13:52:36.872088 kubelet[2475]: I0303 13:52:36.871867 2475 container_manager_linux.go:306] "Creating device plugin manager"
Mar 3 13:52:36.872595 kubelet[2475]: I0303 13:52:36.872438 2475 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 3 13:52:36.882105 kubelet[2475]: I0303 13:52:36.881778 2475 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 13:52:36.883299 kubelet[2475]: I0303 13:52:36.883161 2475 kubelet.go:475] "Attempting to sync node with API server"
Mar 3 13:52:36.883354 kubelet[2475]: I0303 13:52:36.883319 2475 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 3 13:52:36.884833 kubelet[2475]: I0303 13:52:36.884013 2475 kubelet.go:387] "Adding apiserver pod source"
Mar 3 13:52:36.884833 kubelet[2475]: I0303 13:52:36.884365 2475 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 3 13:52:36.886889 kubelet[2475]: E0303 13:52:36.886850 2475 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 3 13:52:36.887127 kubelet[2475]: E0303 13:52:36.886853 2475 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 3 13:52:36.891478 kubelet[2475]: I0303 13:52:36.891410 2475 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 3 13:52:36.894058 kubelet[2475]: I0303 13:52:36.893800 2475 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 3 13:52:36.894058 kubelet[2475]: I0303 13:52:36.893886 2475 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 3 13:52:36.895006 kubelet[2475]: W0303 13:52:36.894834 2475 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 3 13:52:36.913449 kubelet[2475]: I0303 13:52:36.912713 2475 server.go:1262] "Started kubelet"
Mar 3 13:52:36.916694 kubelet[2475]: I0303 13:52:36.916358 2475 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 3 13:52:36.917090 kubelet[2475]: I0303 13:52:36.917067 2475 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 3 13:52:36.920458 kubelet[2475]: I0303 13:52:36.920264 2475 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 3 13:52:36.923505 kubelet[2475]: I0303 13:52:36.923199 2475 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 3 13:52:36.931137 kubelet[2475]: I0303 13:52:36.930841 2475 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 3 13:52:36.937223 kubelet[2475]: I0303 13:52:36.937143 2475 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 3 13:52:36.937802 kubelet[2475]: I0303 13:52:36.937421 2475 server.go:310] "Adding debug handlers to kubelet server"
Mar 3 13:52:36.938470 kubelet[2475]: I0303 13:52:36.938248 2475 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 3 13:52:36.940536 kubelet[2475]: E0303 13:52:36.940389 2475 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 13:52:36.946773 kubelet[2475]: I0303 13:52:36.946400 2475 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 3 13:52:36.947603 kubelet[2475]: I0303 13:52:36.947261 2475 reconciler.go:29] "Reconciler: start to sync state"
Mar 3 13:52:36.950856 kubelet[2475]: E0303 13:52:36.950733 2475 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 3 13:52:36.952715 kubelet[2475]: E0303 13:52:36.952488 2475 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="200ms"
Mar 3 13:52:36.954609 kubelet[2475]: E0303 13:52:36.947329 2475 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189959280dd18d7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 13:52:36.912401786 +0000 UTC m=+0.886324669,LastTimestamp:2026-03-03 13:52:36.912401786 +0000 UTC m=+0.886324669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 3 13:52:36.958863 kubelet[2475]: I0303 13:52:36.958366 2475 factory.go:223] Registration of the systemd container factory successfully
Mar 3 13:52:36.959776 kubelet[2475]: I0303 13:52:36.959583 2475 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 3 13:52:36.977854 kubelet[2475]: I0303 13:52:36.977793 2475 factory.go:223] Registration of the containerd container factory successfully
Mar 3 13:52:37.026850 kubelet[2475]: E0303 13:52:37.026788 2475 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 3 13:52:37.041495 kubelet[2475]: E0303 13:52:37.041184 2475 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 13:52:37.070837 kubelet[2475]: I0303 13:52:37.070079 2475 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 3 13:52:37.070837 kubelet[2475]: I0303 13:52:37.070106 2475 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 3 13:52:37.070837 kubelet[2475]: I0303 13:52:37.070181 2475 state_mem.go:36] "Initialized new in-memory state store"
Mar 3 13:52:37.077016 kubelet[2475]: I0303 13:52:37.076544 2475 policy_none.go:49] "None policy: Start"
Mar 3 13:52:37.077271 kubelet[2475]: I0303 13:52:37.077164 2475 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 3 13:52:37.077427 kubelet[2475]: I0303 13:52:37.077348 2475 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 3 13:52:37.081347 kubelet[2475]: I0303 13:52:37.081205 2475 policy_none.go:47] "Start"
Mar 3 13:52:37.099615 kubelet[2475]: I0303 13:52:37.099574 2475 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 3 13:52:37.117568 kubelet[2475]: I0303 13:52:37.117253 2475 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 3 13:52:37.117568 kubelet[2475]: I0303 13:52:37.117599 2475 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 3 13:52:37.118376 kubelet[2475]: I0303 13:52:37.118060 2475 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 3 13:52:37.119018 kubelet[2475]: E0303 13:52:37.118384 2475 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 3 13:52:37.124515 kubelet[2475]: E0303 13:52:37.124460 2475 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 3 13:52:37.125559 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 3 13:52:37.145066 kubelet[2475]: E0303 13:52:37.142869 2475 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 3 13:52:37.154502 kubelet[2475]: E0303 13:52:37.154233 2475 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="400ms"
Mar 3 13:52:37.159744 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 3 13:52:37.169263 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 3 13:52:37.186580 kubelet[2475]: E0303 13:52:37.186253 2475 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 3 13:52:37.188540 kubelet[2475]: I0303 13:52:37.188338 2475 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 3 13:52:37.192398 kubelet[2475]: I0303 13:52:37.189274 2475 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 3 13:52:37.192398 kubelet[2475]: E0303 13:52:37.191360 2475 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 3 13:52:37.192398 kubelet[2475]: E0303 13:52:37.192209 2475 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 3 13:52:37.192562 kubelet[2475]: I0303 13:52:37.192544 2475 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 3 13:52:37.252351 kubelet[2475]: I0303 13:52:37.251356 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:52:37.252351 kubelet[2475]: I0303 13:52:37.252295 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:52:37.252351 kubelet[2475]: I0303 13:52:37.252330 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:52:37.254521 kubelet[2475]: I0303 13:52:37.254241 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 3 13:52:37.254521 kubelet[2475]: I0303 13:52:37.254337 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b639d37b8a4ae207b3884f7860ea11b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b639d37b8a4ae207b3884f7860ea11b\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 13:52:37.254521 kubelet[2475]: I0303 13:52:37.254360 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:52:37.254521 kubelet[2475]: I0303 13:52:37.254382 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 3 13:52:37.255583 kubelet[2475]: I0303 13:52:37.254405 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b639d37b8a4ae207b3884f7860ea11b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b639d37b8a4ae207b3884f7860ea11b\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 13:52:37.255583 kubelet[2475]: I0303 13:52:37.255459 2475 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b639d37b8a4ae207b3884f7860ea11b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b639d37b8a4ae207b3884f7860ea11b\") " pod="kube-system/kube-apiserver-localhost"
Mar 3 13:52:37.256523 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 3 13:52:37.285958 kubelet[2475]: E0303 13:52:37.285731 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 3 13:52:37.288485 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 3 13:52:37.296737 kubelet[2475]: I0303 13:52:37.296711 2475 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 3 13:52:37.300576 kubelet[2475]: E0303 13:52:37.299387 2475 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost"
Mar 3 13:52:37.307461 kubelet[2475]: E0303 13:52:37.306454 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 3 13:52:37.307474 systemd[1]: Created slice kubepods-burstable-pod4b639d37b8a4ae207b3884f7860ea11b.slice - libcontainer container kubepods-burstable-pod4b639d37b8a4ae207b3884f7860ea11b.slice.
Mar 3 13:52:37.317470 kubelet[2475]: E0303 13:52:37.316810 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 3 13:52:37.514562 kubelet[2475]: I0303 13:52:37.514048 2475 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 3 13:52:37.518084 kubelet[2475]: E0303 13:52:37.515292 2475 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost"
Mar 3 13:52:37.557571 kubelet[2475]: E0303 13:52:37.557206 2475 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="800ms"
Mar 3 13:52:37.595877 kubelet[2475]: E0303 13:52:37.595692 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:52:37.618151 kubelet[2475]: E0303 13:52:37.616728 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:52:37.630745 kubelet[2475]: E0303 13:52:37.630492 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:52:37.637758 containerd[1567]: time="2026-03-03T13:52:37.637282134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b639d37b8a4ae207b3884f7860ea11b,Namespace:kube-system,Attempt:0,}"
Mar 3 13:52:37.637758 containerd[1567]: time="2026-03-03T13:52:37.637327762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 3 13:52:37.637758 containerd[1567]: time="2026-03-03T13:52:37.637529165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 3 13:52:37.799056 kubelet[2475]: E0303 13:52:37.798825 2475 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 3 13:52:37.934153 kubelet[2475]: E0303 13:52:37.933112 2475 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 3 13:52:37.937501 kubelet[2475]: I0303 13:52:37.937264 2475 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 3 13:52:37.953803 kubelet[2475]: E0303 13:52:37.952512 2475 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost"
Mar 3 13:52:37.992513 kubelet[2475]: E0303 13:52:37.992310 2475 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.111:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 3 13:52:38.361121 kubelet[2475]: E0303 13:52:38.360451 2475 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="1.6s"
Mar 3 13:52:38.423490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883662693.mount: Deactivated successfully.
Mar 3 13:52:38.459701 containerd[1567]: time="2026-03-03T13:52:38.459478694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:52:38.474197 containerd[1567]: time="2026-03-03T13:52:38.473572032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 3 13:52:38.481161 containerd[1567]: time="2026-03-03T13:52:38.481112293Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:52:38.496715 containerd[1567]: time="2026-03-03T13:52:38.495181145Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:52:38.502132 containerd[1567]: time="2026-03-03T13:52:38.501349263Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:52:38.513243 containerd[1567]: time="2026-03-03T13:52:38.511353519Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 13:52:38.519873 containerd[1567]: time="2026-03-03T13:52:38.519349312Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 3 13:52:38.522613 containerd[1567]: time="2026-03-03T13:52:38.522549946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 3 13:52:38.524462 
containerd[1567]: time="2026-03-03T13:52:38.524102826Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 882.211145ms" Mar 3 13:52:38.529061 containerd[1567]: time="2026-03-03T13:52:38.527573231Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 873.418089ms" Mar 3 13:52:38.536037 containerd[1567]: time="2026-03-03T13:52:38.534169358Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 883.870309ms" Mar 3 13:52:38.622718 kubelet[2475]: E0303 13:52:38.622367 2475 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.111:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 3 13:52:38.799297 kubelet[2475]: I0303 13:52:38.798869 2475 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:52:38.801996 kubelet[2475]: E0303 13:52:38.800193 2475 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" 
node="localhost" Mar 3 13:52:38.818072 containerd[1567]: time="2026-03-03T13:52:38.817218669Z" level=info msg="connecting to shim e360d7d0aa230cc2725657ecf10f35e80b385d318c39aeb49d811345dc0957df" address="unix:///run/containerd/s/a59df9533d579515d51c60bf40dbd828f590223e7435a0aa6d0226e471874329" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:52:38.850604 containerd[1567]: time="2026-03-03T13:52:38.850154938Z" level=info msg="connecting to shim ff125fdbe2484f67ce2f8247a2d23b2d4c3b6fd160228426d587f21784fce0b3" address="unix:///run/containerd/s/9094a3a3edbfb9fee96b620a1b668bec5c06ee43211ea8a24c07dca4f54e321a" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:52:38.861717 containerd[1567]: time="2026-03-03T13:52:38.861589945Z" level=info msg="connecting to shim ea04a224629e2a4daac8afc116498a2fca6ab92c1971068124567047b3f9edb1" address="unix:///run/containerd/s/3dfb459d16c2256a064a1c24716cfef5e146d238c5e2114b1256969c36502095" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:52:38.935751 kubelet[2475]: E0303 13:52:38.935201 2475 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.111:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 3 13:52:39.067617 systemd[1]: Started cri-containerd-e360d7d0aa230cc2725657ecf10f35e80b385d318c39aeb49d811345dc0957df.scope - libcontainer container e360d7d0aa230cc2725657ecf10f35e80b385d318c39aeb49d811345dc0957df. Mar 3 13:52:39.189765 systemd[1]: Started cri-containerd-ea04a224629e2a4daac8afc116498a2fca6ab92c1971068124567047b3f9edb1.scope - libcontainer container ea04a224629e2a4daac8afc116498a2fca6ab92c1971068124567047b3f9edb1. 
Mar 3 13:52:39.418194 systemd[1]: Started cri-containerd-ff125fdbe2484f67ce2f8247a2d23b2d4c3b6fd160228426d587f21784fce0b3.scope - libcontainer container ff125fdbe2484f67ce2f8247a2d23b2d4c3b6fd160228426d587f21784fce0b3. Mar 3 13:52:39.537726 containerd[1567]: time="2026-03-03T13:52:39.534024784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e360d7d0aa230cc2725657ecf10f35e80b385d318c39aeb49d811345dc0957df\"" Mar 3 13:52:39.539211 kubelet[2475]: E0303 13:52:39.539175 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:39.561118 containerd[1567]: time="2026-03-03T13:52:39.560624241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b639d37b8a4ae207b3884f7860ea11b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea04a224629e2a4daac8afc116498a2fca6ab92c1971068124567047b3f9edb1\"" Mar 3 13:52:39.562026 kubelet[2475]: E0303 13:52:39.561820 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:39.567023 containerd[1567]: time="2026-03-03T13:52:39.566569120Z" level=info msg="CreateContainer within sandbox \"e360d7d0aa230cc2725657ecf10f35e80b385d318c39aeb49d811345dc0957df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 3 13:52:39.576569 containerd[1567]: time="2026-03-03T13:52:39.576462838Z" level=info msg="CreateContainer within sandbox \"ea04a224629e2a4daac8afc116498a2fca6ab92c1971068124567047b3f9edb1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 3 13:52:39.582024 kubelet[2475]: E0303 13:52:39.581312 2475 reflector.go:205] "Failed to watch" err="failed to list 
*v1.Service: Get \"https://10.0.0.111:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 3 13:52:39.633063 containerd[1567]: time="2026-03-03T13:52:39.631527980Z" level=info msg="Container 48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:52:39.670549 containerd[1567]: time="2026-03-03T13:52:39.670175188Z" level=info msg="CreateContainer within sandbox \"e360d7d0aa230cc2725657ecf10f35e80b385d318c39aeb49d811345dc0957df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0\"" Mar 3 13:52:39.674475 containerd[1567]: time="2026-03-03T13:52:39.674387763Z" level=info msg="StartContainer for \"48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0\"" Mar 3 13:52:39.674475 containerd[1567]: time="2026-03-03T13:52:39.674423816Z" level=info msg="Container e739e58c7e6838a85c909bc2ed6107a501c974a8975692847619a6ad73a7d782: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:52:39.684292 containerd[1567]: time="2026-03-03T13:52:39.684182768Z" level=info msg="connecting to shim 48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0" address="unix:///run/containerd/s/a59df9533d579515d51c60bf40dbd828f590223e7435a0aa6d0226e471874329" protocol=ttrpc version=3 Mar 3 13:52:39.705858 containerd[1567]: time="2026-03-03T13:52:39.705456212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff125fdbe2484f67ce2f8247a2d23b2d4c3b6fd160228426d587f21784fce0b3\"" Mar 3 13:52:39.707557 kubelet[2475]: E0303 13:52:39.707438 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:39.725023 containerd[1567]: time="2026-03-03T13:52:39.724831348Z" level=info msg="CreateContainer within sandbox \"ff125fdbe2484f67ce2f8247a2d23b2d4c3b6fd160228426d587f21784fce0b3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 3 13:52:39.725753 containerd[1567]: time="2026-03-03T13:52:39.725410959Z" level=info msg="CreateContainer within sandbox \"ea04a224629e2a4daac8afc116498a2fca6ab92c1971068124567047b3f9edb1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e739e58c7e6838a85c909bc2ed6107a501c974a8975692847619a6ad73a7d782\"" Mar 3 13:52:39.727362 containerd[1567]: time="2026-03-03T13:52:39.727301806Z" level=info msg="StartContainer for \"e739e58c7e6838a85c909bc2ed6107a501c974a8975692847619a6ad73a7d782\"" Mar 3 13:52:39.730406 containerd[1567]: time="2026-03-03T13:52:39.730044821Z" level=info msg="connecting to shim e739e58c7e6838a85c909bc2ed6107a501c974a8975692847619a6ad73a7d782" address="unix:///run/containerd/s/3dfb459d16c2256a064a1c24716cfef5e146d238c5e2114b1256969c36502095" protocol=ttrpc version=3 Mar 3 13:52:39.743468 systemd[1]: Started cri-containerd-48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0.scope - libcontainer container 48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0. 
Mar 3 13:52:39.793243 kubelet[2475]: E0303 13:52:39.790412 2475 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.111:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.111:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 3 13:52:39.853104 containerd[1567]: time="2026-03-03T13:52:39.852524013Z" level=info msg="Container d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:52:39.881413 containerd[1567]: time="2026-03-03T13:52:39.879299947Z" level=info msg="CreateContainer within sandbox \"ff125fdbe2484f67ce2f8247a2d23b2d4c3b6fd160228426d587f21784fce0b3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221\"" Mar 3 13:52:39.886175 containerd[1567]: time="2026-03-03T13:52:39.884210724Z" level=info msg="StartContainer for \"d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221\"" Mar 3 13:52:39.885237 systemd[1]: Started cri-containerd-e739e58c7e6838a85c909bc2ed6107a501c974a8975692847619a6ad73a7d782.scope - libcontainer container e739e58c7e6838a85c909bc2ed6107a501c974a8975692847619a6ad73a7d782. Mar 3 13:52:39.892262 containerd[1567]: time="2026-03-03T13:52:39.891799596Z" level=info msg="connecting to shim d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221" address="unix:///run/containerd/s/9094a3a3edbfb9fee96b620a1b668bec5c06ee43211ea8a24c07dca4f54e321a" protocol=ttrpc version=3 Mar 3 13:52:39.965254 systemd[1]: Started cri-containerd-d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221.scope - libcontainer container d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221. 
Mar 3 13:52:39.977604 kubelet[2475]: E0303 13:52:39.976399 2475 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.111:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.111:6443: connect: connection refused" interval="3.2s" Mar 3 13:52:40.100026 containerd[1567]: time="2026-03-03T13:52:40.099455271Z" level=info msg="StartContainer for \"e739e58c7e6838a85c909bc2ed6107a501c974a8975692847619a6ad73a7d782\" returns successfully" Mar 3 13:52:40.145174 kubelet[2475]: E0303 13:52:40.144758 2475 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.111:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.111:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189959280dd18d7a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-03 13:52:36.912401786 +0000 UTC m=+0.886324669,LastTimestamp:2026-03-03 13:52:36.912401786 +0000 UTC m=+0.886324669,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 3 13:52:40.183030 containerd[1567]: time="2026-03-03T13:52:40.182634163Z" level=info msg="StartContainer for \"48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0\" returns successfully" Mar 3 13:52:40.368290 kubelet[2475]: E0303 13:52:40.364286 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:52:40.368290 kubelet[2475]: E0303 13:52:40.364600 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:40.374414 kubelet[2475]: E0303 13:52:40.374271 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:52:40.376388 kubelet[2475]: E0303 13:52:40.374440 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:40.409039 kubelet[2475]: I0303 13:52:40.408133 2475 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:52:40.410142 kubelet[2475]: E0303 13:52:40.410036 2475 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.111:6443/api/v1/nodes\": dial tcp 10.0.0.111:6443: connect: connection refused" node="localhost" Mar 3 13:52:40.491821 containerd[1567]: time="2026-03-03T13:52:40.489196240Z" level=info msg="StartContainer for \"d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221\" returns successfully" Mar 3 13:52:41.416265 kubelet[2475]: E0303 13:52:41.415772 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:52:41.416265 kubelet[2475]: E0303 13:52:41.416203 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:41.420479 kubelet[2475]: E0303 13:52:41.420082 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:52:41.420479 kubelet[2475]: E0303 13:52:41.420207 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Mar 3 13:52:42.451337 kubelet[2475]: E0303 13:52:42.450284 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:52:42.451337 kubelet[2475]: E0303 13:52:42.450564 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:43.448547 kubelet[2475]: E0303 13:52:43.444336 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:52:43.448547 kubelet[2475]: E0303 13:52:43.445104 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:43.619310 kubelet[2475]: I0303 13:52:43.618801 2475 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:52:43.714873 kubelet[2475]: E0303 13:52:43.714645 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:52:43.715354 kubelet[2475]: E0303 13:52:43.715229 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:44.386347 kubelet[2475]: E0303 13:52:44.384460 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:52:44.389824 kubelet[2475]: E0303 13:52:44.389585 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:44.633840 kubelet[2475]: E0303 
13:52:44.633169 2475 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 3 13:52:44.648648 kubelet[2475]: E0303 13:52:44.635076 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:47.199817 kubelet[2475]: E0303 13:52:47.199318 2475 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 3 13:52:48.768306 kubelet[2475]: E0303 13:52:48.767623 2475 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 3 13:52:48.983453 kubelet[2475]: I0303 13:52:48.982781 2475 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 3 13:52:48.987062 kubelet[2475]: I0303 13:52:48.986045 2475 apiserver.go:52] "Watching apiserver" Mar 3 13:52:49.046173 kubelet[2475]: I0303 13:52:49.045395 2475 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:52:49.050762 kubelet[2475]: I0303 13:52:49.049655 2475 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 13:52:49.149615 kubelet[2475]: E0303 13:52:49.145374 2475 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:52:49.149615 kubelet[2475]: I0303 13:52:49.145415 2475 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 3 13:52:49.155616 kubelet[2475]: E0303 13:52:49.152533 2475 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no 
PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 3 13:52:49.155616 kubelet[2475]: I0303 13:52:49.152563 2475 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 3 13:52:49.166692 kubelet[2475]: E0303 13:52:49.166382 2475 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 3 13:52:53.228651 kubelet[2475]: I0303 13:52:53.226094 2475 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 3 13:52:53.283371 kubelet[2475]: E0303 13:52:53.282820 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:53.714244 kubelet[2475]: E0303 13:52:53.713354 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:53.810371 kubelet[2475]: I0303 13:52:53.808091 2475 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:52:53.856560 kubelet[2475]: E0303 13:52:53.856455 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:53.940881 kubelet[2475]: I0303 13:52:53.940140 2475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.940044645 podStartE2EDuration="940.044645ms" podCreationTimestamp="2026-03-03 13:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-03 13:52:53.899563421 +0000 UTC m=+17.873486305" watchObservedRunningTime="2026-03-03 13:52:53.940044645 +0000 UTC m=+17.913967528" Mar 3 13:52:53.940881 kubelet[2475]: I0303 13:52:53.940287 2475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.940278472 podStartE2EDuration="940.278472ms" podCreationTimestamp="2026-03-03 13:52:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:52:53.933325386 +0000 UTC m=+17.907248269" watchObservedRunningTime="2026-03-03 13:52:53.940278472 +0000 UTC m=+17.914201354" Mar 3 13:52:54.574192 kubelet[2475]: I0303 13:52:54.573407 2475 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 3 13:52:54.620094 kubelet[2475]: E0303 13:52:54.619549 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:54.679116 systemd[1]: Reload requested from client PID 2770 ('systemctl') (unit session-9.scope)... Mar 3 13:52:54.679547 systemd[1]: Reloading... 
Mar 3 13:52:54.733855 kubelet[2475]: E0303 13:52:54.728390 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:54.733855 kubelet[2475]: E0303 13:52:54.730636 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:54.733855 kubelet[2475]: E0303 13:52:54.732603 2475 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:55.165645 zram_generator::config[2812]: No configuration found. Mar 3 13:52:55.767594 systemd[1]: Reloading finished in 1087 ms. Mar 3 13:52:55.955503 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:52:55.984650 systemd[1]: kubelet.service: Deactivated successfully. Mar 3 13:52:55.985716 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:52:55.985865 systemd[1]: kubelet.service: Consumed 4.101s CPU time, 130.1M memory peak. Mar 3 13:52:55.992232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 3 13:52:56.479065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 3 13:52:56.506459 (kubelet)[2857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 3 13:52:56.739709 kubelet[2857]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 3 13:52:56.739709 kubelet[2857]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 3 13:52:56.739709 kubelet[2857]: I0303 13:52:56.738691 2857 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 3 13:52:56.782711 kubelet[2857]: I0303 13:52:56.781280 2857 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 3 13:52:56.782711 kubelet[2857]: I0303 13:52:56.781324 2857 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 3 13:52:56.782711 kubelet[2857]: I0303 13:52:56.781369 2857 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 3 13:52:56.782711 kubelet[2857]: I0303 13:52:56.781387 2857 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 3 13:52:56.782711 kubelet[2857]: I0303 13:52:56.781694 2857 server.go:956] "Client rotation is on, will bootstrap in background" Mar 3 13:52:56.790047 kubelet[2857]: I0303 13:52:56.789589 2857 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 3 13:52:56.811883 kubelet[2857]: I0303 13:52:56.811693 2857 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 3 13:52:56.828464 kubelet[2857]: I0303 13:52:56.828040 2857 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 3 13:52:56.863418 kubelet[2857]: I0303 13:52:56.851606 2857 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 3 13:52:56.863418 kubelet[2857]: I0303 13:52:56.852662 2857 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 3 13:52:56.863418 kubelet[2857]: I0303 13:52:56.853208 2857 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 3 13:52:56.863418 kubelet[2857]: I0303 13:52:56.854322 2857 topology_manager.go:138] "Creating topology manager with none policy" Mar 3 13:52:56.863822 
kubelet[2857]: I0303 13:52:56.854335 2857 container_manager_linux.go:306] "Creating device plugin manager" Mar 3 13:52:56.863822 kubelet[2857]: I0303 13:52:56.854365 2857 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 3 13:52:56.863822 kubelet[2857]: I0303 13:52:56.854576 2857 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:52:56.863822 kubelet[2857]: I0303 13:52:56.854884 2857 kubelet.go:475] "Attempting to sync node with API server" Mar 3 13:52:56.863822 kubelet[2857]: I0303 13:52:56.856331 2857 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 3 13:52:56.863822 kubelet[2857]: I0303 13:52:56.856432 2857 kubelet.go:387] "Adding apiserver pod source" Mar 3 13:52:56.863822 kubelet[2857]: I0303 13:52:56.856449 2857 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 3 13:52:56.890624 kubelet[2857]: I0303 13:52:56.882237 2857 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 3 13:52:56.894019 kubelet[2857]: I0303 13:52:56.893294 2857 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 3 13:52:56.894019 kubelet[2857]: I0303 13:52:56.893340 2857 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 3 13:52:56.967556 kubelet[2857]: I0303 13:52:56.966724 2857 server.go:1262] "Started kubelet" Mar 3 13:52:56.969530 kubelet[2857]: I0303 13:52:56.968505 2857 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 3 13:52:56.969530 kubelet[2857]: I0303 13:52:56.968574 2857 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 3 13:52:56.971352 kubelet[2857]: I0303 13:52:56.969670 2857 fs_resource_analyzer.go:67] 
"Starting FS ResourceAnalyzer" Mar 3 13:52:56.973107 kubelet[2857]: I0303 13:52:56.972730 2857 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 3 13:52:56.982029 kubelet[2857]: I0303 13:52:56.980237 2857 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 3 13:52:56.982207 kubelet[2857]: E0303 13:52:56.981836 2857 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 3 13:52:56.982263 kubelet[2857]: I0303 13:52:56.982220 2857 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 3 13:52:56.982469 kubelet[2857]: I0303 13:52:56.982377 2857 reconciler.go:29] "Reconciler: start to sync state" Mar 3 13:52:56.983824 kubelet[2857]: I0303 13:52:56.983713 2857 server.go:310] "Adding debug handlers to kubelet server" Mar 3 13:52:56.987031 kubelet[2857]: I0303 13:52:56.986693 2857 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 3 13:52:56.993654 kubelet[2857]: I0303 13:52:56.990553 2857 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 3 13:52:57.035035 kubelet[2857]: I0303 13:52:57.034143 2857 factory.go:223] Registration of the containerd container factory successfully Mar 3 13:52:57.035035 kubelet[2857]: I0303 13:52:57.034246 2857 factory.go:223] Registration of the systemd container factory successfully Mar 3 13:52:57.035035 kubelet[2857]: I0303 13:52:57.034494 2857 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 3 13:52:57.247539 kubelet[2857]: I0303 13:52:57.244575 2857 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 3 13:52:57.273082 kubelet[2857]: I0303 13:52:57.272525 2857 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 3 13:52:57.273082 kubelet[2857]: I0303 13:52:57.272605 2857 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 3 13:52:57.273082 kubelet[2857]: I0303 13:52:57.272708 2857 kubelet.go:2428] "Starting kubelet main sync loop" Mar 3 13:52:57.274468 kubelet[2857]: E0303 13:52:57.274389 2857 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 3 13:52:57.374730 kubelet[2857]: E0303 13:52:57.374676 2857 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 3 13:52:57.410077 kubelet[2857]: I0303 13:52:57.408565 2857 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 3 13:52:57.410077 kubelet[2857]: I0303 13:52:57.408587 2857 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 3 13:52:57.410077 kubelet[2857]: I0303 13:52:57.408614 2857 state_mem.go:36] "Initialized new in-memory state store" Mar 3 13:52:57.410077 kubelet[2857]: I0303 13:52:57.408886 2857 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 3 13:52:57.410077 kubelet[2857]: I0303 13:52:57.409044 2857 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 3 13:52:57.410077 kubelet[2857]: I0303 13:52:57.409067 2857 policy_none.go:49] "None policy: Start" Mar 3 13:52:57.410077 kubelet[2857]: I0303 13:52:57.409079 2857 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 3 13:52:57.410077 kubelet[2857]: I0303 13:52:57.409095 2857 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 3 13:52:57.410077 kubelet[2857]: I0303 13:52:57.409215 2857 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 3 13:52:57.410077 kubelet[2857]: I0303 
13:52:57.409228 2857 policy_none.go:47] "Start" Mar 3 13:52:57.443976 kubelet[2857]: E0303 13:52:57.443490 2857 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 3 13:52:57.443976 kubelet[2857]: I0303 13:52:57.443881 2857 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 3 13:52:57.444407 kubelet[2857]: I0303 13:52:57.444092 2857 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 3 13:52:57.447069 kubelet[2857]: I0303 13:52:57.445482 2857 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 3 13:52:57.456290 kubelet[2857]: E0303 13:52:57.456039 2857 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 3 13:52:57.585670 kubelet[2857]: I0303 13:52:57.585417 2857 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 3 13:52:57.588822 kubelet[2857]: I0303 13:52:57.588250 2857 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 3 13:52:57.589416 kubelet[2857]: I0303 13:52:57.588663 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b639d37b8a4ae207b3884f7860ea11b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b639d37b8a4ae207b3884f7860ea11b\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:52:57.589484 kubelet[2857]: I0303 13:52:57.589451 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b639d37b8a4ae207b3884f7860ea11b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b639d37b8a4ae207b3884f7860ea11b\") " pod="kube-system/kube-apiserver-localhost" 
Mar 3 13:52:57.590072 kubelet[2857]: I0303 13:52:57.588665 2857 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:52:57.590277 kubelet[2857]: I0303 13:52:57.590065 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b639d37b8a4ae207b3884f7860ea11b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b639d37b8a4ae207b3884f7860ea11b\") " pod="kube-system/kube-apiserver-localhost" Mar 3 13:52:57.607537 kubelet[2857]: I0303 13:52:57.607501 2857 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 3 13:52:57.641866 kubelet[2857]: E0303 13:52:57.641436 2857 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 3 13:52:57.665129 kubelet[2857]: E0303 13:52:57.664884 2857 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 3 13:52:57.665129 kubelet[2857]: E0303 13:52:57.665122 2857 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 3 13:52:57.742393 kubelet[2857]: I0303 13:52:57.712342 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:52:57.742393 kubelet[2857]: I0303 13:52:57.712606 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:52:57.742393 kubelet[2857]: I0303 13:52:57.712648 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:52:57.742393 kubelet[2857]: I0303 13:52:57.712701 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 3 13:52:57.742393 kubelet[2857]: I0303 13:52:57.712846 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:52:57.772401 kubelet[2857]: I0303 13:52:57.712889 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 3 13:52:57.866154 kubelet[2857]: I0303 13:52:57.861488 2857 apiserver.go:52] "Watching apiserver" Mar 3 13:52:57.871617 kubelet[2857]: I0303 13:52:57.870329 2857 
kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 3 13:52:57.871617 kubelet[2857]: I0303 13:52:57.870594 2857 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 3 13:52:57.943535 kubelet[2857]: E0303 13:52:57.942555 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:57.974036 kubelet[2857]: E0303 13:52:57.973123 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:57.974036 kubelet[2857]: E0303 13:52:57.973395 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:57.988463 kubelet[2857]: I0303 13:52:57.983621 2857 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 3 13:52:58.098395 kubelet[2857]: I0303 13:52:58.098129 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.098103808 podStartE2EDuration="4.098103808s" podCreationTimestamp="2026-03-03 13:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:52:58.058828723 +0000 UTC m=+1.471436967" watchObservedRunningTime="2026-03-03 13:52:58.098103808 +0000 UTC m=+1.510712051" Mar 3 13:52:58.328549 kubelet[2857]: E0303 13:52:58.327591 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:58.335205 kubelet[2857]: E0303 13:52:58.331537 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:58.335205 kubelet[2857]: E0303 13:52:58.333605 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:59.342266 kubelet[2857]: E0303 13:52:59.341160 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:59.348134 kubelet[2857]: E0303 13:52:59.345482 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:52:59.482162 kubelet[2857]: I0303 13:52:59.481406 2857 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 3 13:52:59.483493 containerd[1567]: time="2026-03-03T13:52:59.483419585Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 3 13:52:59.485214 kubelet[2857]: I0303 13:52:59.485173 2857 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 3 13:53:00.139177 systemd[1]: Created slice kubepods-besteffort-pod2e62cf15_defa_4210_ac0e_f08f82863d8d.slice - libcontainer container kubepods-besteffort-pod2e62cf15_defa_4210_ac0e_f08f82863d8d.slice. 
Mar 3 13:53:00.215495 kubelet[2857]: I0303 13:53:00.215055 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e62cf15-defa-4210-ac0e-f08f82863d8d-kube-proxy\") pod \"kube-proxy-kjtzl\" (UID: \"2e62cf15-defa-4210-ac0e-f08f82863d8d\") " pod="kube-system/kube-proxy-kjtzl" Mar 3 13:53:00.215495 kubelet[2857]: I0303 13:53:00.215182 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e62cf15-defa-4210-ac0e-f08f82863d8d-xtables-lock\") pod \"kube-proxy-kjtzl\" (UID: \"2e62cf15-defa-4210-ac0e-f08f82863d8d\") " pod="kube-system/kube-proxy-kjtzl" Mar 3 13:53:00.215495 kubelet[2857]: I0303 13:53:00.215211 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e62cf15-defa-4210-ac0e-f08f82863d8d-lib-modules\") pod \"kube-proxy-kjtzl\" (UID: \"2e62cf15-defa-4210-ac0e-f08f82863d8d\") " pod="kube-system/kube-proxy-kjtzl" Mar 3 13:53:00.215495 kubelet[2857]: I0303 13:53:00.215242 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq7vn\" (UniqueName: \"kubernetes.io/projected/2e62cf15-defa-4210-ac0e-f08f82863d8d-kube-api-access-pq7vn\") pod \"kube-proxy-kjtzl\" (UID: \"2e62cf15-defa-4210-ac0e-f08f82863d8d\") " pod="kube-system/kube-proxy-kjtzl" Mar 3 13:53:00.600108 kubelet[2857]: E0303 13:53:00.570760 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:00.612033 containerd[1567]: time="2026-03-03T13:53:00.610359077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjtzl,Uid:2e62cf15-defa-4210-ac0e-f08f82863d8d,Namespace:kube-system,Attempt:0,}" Mar 3 
13:53:00.809085 containerd[1567]: time="2026-03-03T13:53:00.803185404Z" level=info msg="connecting to shim 2894e0315d5f26d45cdb4015023bc5060f884c7862bf9d2d4d7c8547088a1d7c" address="unix:///run/containerd/s/5b8664249d94948458330cfaf9631cbe0c24a40ee28592502832e667db3c1899" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:53:01.160343 systemd[1]: Started cri-containerd-2894e0315d5f26d45cdb4015023bc5060f884c7862bf9d2d4d7c8547088a1d7c.scope - libcontainer container 2894e0315d5f26d45cdb4015023bc5060f884c7862bf9d2d4d7c8547088a1d7c. Mar 3 13:53:01.225186 systemd[1]: Created slice kubepods-besteffort-podea3386c6_62d2_451a_8c97_6cda4b370ec3.slice - libcontainer container kubepods-besteffort-podea3386c6_62d2_451a_8c97_6cda4b370ec3.slice. Mar 3 13:53:01.289557 kubelet[2857]: I0303 13:53:01.289343 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6nxs\" (UniqueName: \"kubernetes.io/projected/ea3386c6-62d2-451a-8c97-6cda4b370ec3-kube-api-access-g6nxs\") pod \"tigera-operator-5588576f44-nshbb\" (UID: \"ea3386c6-62d2-451a-8c97-6cda4b370ec3\") " pod="tigera-operator/tigera-operator-5588576f44-nshbb" Mar 3 13:53:01.289557 kubelet[2857]: I0303 13:53:01.289392 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea3386c6-62d2-451a-8c97-6cda4b370ec3-var-lib-calico\") pod \"tigera-operator-5588576f44-nshbb\" (UID: \"ea3386c6-62d2-451a-8c97-6cda4b370ec3\") " pod="tigera-operator/tigera-operator-5588576f44-nshbb" Mar 3 13:53:01.420556 containerd[1567]: time="2026-03-03T13:53:01.420337816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjtzl,Uid:2e62cf15-defa-4210-ac0e-f08f82863d8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"2894e0315d5f26d45cdb4015023bc5060f884c7862bf9d2d4d7c8547088a1d7c\"" Mar 3 13:53:01.425231 kubelet[2857]: E0303 13:53:01.423156 2857 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:01.459239 containerd[1567]: time="2026-03-03T13:53:01.459195734Z" level=info msg="CreateContainer within sandbox \"2894e0315d5f26d45cdb4015023bc5060f884c7862bf9d2d4d7c8547088a1d7c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 3 13:53:01.614723 containerd[1567]: time="2026-03-03T13:53:01.600618062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-nshbb,Uid:ea3386c6-62d2-451a-8c97-6cda4b370ec3,Namespace:tigera-operator,Attempt:0,}" Mar 3 13:53:01.781303 containerd[1567]: time="2026-03-03T13:53:01.780586388Z" level=info msg="Container 07458609b3b0fe4adc8d56fdf7de35150646b148bfb611c144a7802d6c3a4e00: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:53:01.783270 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3395265053.mount: Deactivated successfully. Mar 3 13:53:01.840064 containerd[1567]: time="2026-03-03T13:53:01.840015437Z" level=info msg="CreateContainer within sandbox \"2894e0315d5f26d45cdb4015023bc5060f884c7862bf9d2d4d7c8547088a1d7c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"07458609b3b0fe4adc8d56fdf7de35150646b148bfb611c144a7802d6c3a4e00\"" Mar 3 13:53:01.845516 containerd[1567]: time="2026-03-03T13:53:01.845217809Z" level=info msg="connecting to shim a7a69746a397888d7eee5d0663206ac24557f64ceaab79584218f8c77414c6b3" address="unix:///run/containerd/s/387e431efb74938ff0a683a992cee2bb520b90b3623440cc4049bcbadd43c138" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:53:01.846500 containerd[1567]: time="2026-03-03T13:53:01.846466630Z" level=info msg="StartContainer for \"07458609b3b0fe4adc8d56fdf7de35150646b148bfb611c144a7802d6c3a4e00\"" Mar 3 13:53:01.865213 containerd[1567]: time="2026-03-03T13:53:01.865174235Z" level=info msg="connecting to shim 
07458609b3b0fe4adc8d56fdf7de35150646b148bfb611c144a7802d6c3a4e00" address="unix:///run/containerd/s/5b8664249d94948458330cfaf9631cbe0c24a40ee28592502832e667db3c1899" protocol=ttrpc version=3 Mar 3 13:53:01.965243 systemd[1]: Started cri-containerd-07458609b3b0fe4adc8d56fdf7de35150646b148bfb611c144a7802d6c3a4e00.scope - libcontainer container 07458609b3b0fe4adc8d56fdf7de35150646b148bfb611c144a7802d6c3a4e00. Mar 3 13:53:02.026533 systemd[1]: Started cri-containerd-a7a69746a397888d7eee5d0663206ac24557f64ceaab79584218f8c77414c6b3.scope - libcontainer container a7a69746a397888d7eee5d0663206ac24557f64ceaab79584218f8c77414c6b3. Mar 3 13:53:02.625201 containerd[1567]: time="2026-03-03T13:53:02.624037721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-nshbb,Uid:ea3386c6-62d2-451a-8c97-6cda4b370ec3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a7a69746a397888d7eee5d0663206ac24557f64ceaab79584218f8c77414c6b3\"" Mar 3 13:53:02.632565 containerd[1567]: time="2026-03-03T13:53:02.632530189Z" level=info msg="StartContainer for \"07458609b3b0fe4adc8d56fdf7de35150646b148bfb611c144a7802d6c3a4e00\" returns successfully" Mar 3 13:53:02.639746 containerd[1567]: time="2026-03-03T13:53:02.639672206Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 3 13:53:03.402585 kubelet[2857]: E0303 13:53:03.401661 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:04.279778 kubelet[2857]: E0303 13:53:04.279105 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:04.348025 kubelet[2857]: I0303 13:53:04.347263 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kjtzl" podStartSLOduration=4.347240256 
podStartE2EDuration="4.347240256s" podCreationTimestamp="2026-03-03 13:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:53:03.460561336 +0000 UTC m=+6.873169579" watchObservedRunningTime="2026-03-03 13:53:04.347240256 +0000 UTC m=+7.759848499" Mar 3 13:53:04.366669 kubelet[2857]: E0303 13:53:04.365511 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:04.422013 kubelet[2857]: E0303 13:53:04.421705 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:04.428315 kubelet[2857]: E0303 13:53:04.426768 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:04.443122 kubelet[2857]: E0303 13:53:04.441495 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:04.697104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4198851163.mount: Deactivated successfully. 
Mar 3 13:53:06.961687 kubelet[2857]: E0303 13:53:06.961277 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:08.139209 kubelet[2857]: E0303 13:53:08.138687 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:14.605434 containerd[1567]: time="2026-03-03T13:53:14.603203538Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:53:14.620697 containerd[1567]: time="2026-03-03T13:53:14.620264143Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 3 13:53:14.627070 containerd[1567]: time="2026-03-03T13:53:14.626508501Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:53:14.633593 containerd[1567]: time="2026-03-03T13:53:14.632539453Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:53:14.637501 containerd[1567]: time="2026-03-03T13:53:14.635599045Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 11.995880243s" Mar 3 13:53:14.637501 containerd[1567]: time="2026-03-03T13:53:14.635698280Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference 
\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 3 13:53:14.656685 containerd[1567]: time="2026-03-03T13:53:14.656266988Z" level=info msg="CreateContainer within sandbox \"a7a69746a397888d7eee5d0663206ac24557f64ceaab79584218f8c77414c6b3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 3 13:53:14.713301 containerd[1567]: time="2026-03-03T13:53:14.711377110Z" level=info msg="Container 29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:53:14.748568 containerd[1567]: time="2026-03-03T13:53:14.748026371Z" level=info msg="CreateContainer within sandbox \"a7a69746a397888d7eee5d0663206ac24557f64ceaab79584218f8c77414c6b3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf\"" Mar 3 13:53:14.750687 containerd[1567]: time="2026-03-03T13:53:14.750643158Z" level=info msg="StartContainer for \"29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf\"" Mar 3 13:53:14.753036 containerd[1567]: time="2026-03-03T13:53:14.752551200Z" level=info msg="connecting to shim 29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf" address="unix:///run/containerd/s/387e431efb74938ff0a683a992cee2bb520b90b3623440cc4049bcbadd43c138" protocol=ttrpc version=3 Mar 3 13:53:14.866787 systemd[1]: Started cri-containerd-29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf.scope - libcontainer container 29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf. Mar 3 13:53:15.197187 containerd[1567]: time="2026-03-03T13:53:15.196279218Z" level=info msg="StartContainer for \"29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf\" returns successfully" Mar 3 13:53:19.617770 systemd[1]: cri-containerd-29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf.scope: Deactivated successfully. 
Mar 3 13:53:19.618431 systemd[1]: cri-containerd-29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf.scope: Consumed 1.103s CPU time, 41.5M memory peak. Mar 3 13:53:19.630648 containerd[1567]: time="2026-03-03T13:53:19.630449276Z" level=info msg="received container exit event container_id:\"29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf\" id:\"29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf\" pid:3207 exit_status:1 exited_at:{seconds:1772545999 nanos:629047165}" Mar 3 13:53:19.890310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf-rootfs.mount: Deactivated successfully. Mar 3 13:53:20.273175 kubelet[2857]: I0303 13:53:20.272187 2857 scope.go:117] "RemoveContainer" containerID="29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf" Mar 3 13:53:20.300182 containerd[1567]: time="2026-03-03T13:53:20.299710591Z" level=info msg="CreateContainer within sandbox \"a7a69746a397888d7eee5d0663206ac24557f64ceaab79584218f8c77414c6b3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Mar 3 13:53:20.376057 containerd[1567]: time="2026-03-03T13:53:20.373594749Z" level=info msg="Container 9370aabed52cc814196b05199ffc2dfe10eee4f1c1926be15708c0f1437c107d: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:53:20.385807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2886547949.mount: Deactivated successfully. 
Mar 3 13:53:20.412775 containerd[1567]: time="2026-03-03T13:53:20.412651251Z" level=info msg="CreateContainer within sandbox \"a7a69746a397888d7eee5d0663206ac24557f64ceaab79584218f8c77414c6b3\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"9370aabed52cc814196b05199ffc2dfe10eee4f1c1926be15708c0f1437c107d\"" Mar 3 13:53:20.414317 containerd[1567]: time="2026-03-03T13:53:20.414213558Z" level=info msg="StartContainer for \"9370aabed52cc814196b05199ffc2dfe10eee4f1c1926be15708c0f1437c107d\"" Mar 3 13:53:20.416067 containerd[1567]: time="2026-03-03T13:53:20.415676739Z" level=info msg="connecting to shim 9370aabed52cc814196b05199ffc2dfe10eee4f1c1926be15708c0f1437c107d" address="unix:///run/containerd/s/387e431efb74938ff0a683a992cee2bb520b90b3623440cc4049bcbadd43c138" protocol=ttrpc version=3 Mar 3 13:53:20.541597 systemd[1]: Started cri-containerd-9370aabed52cc814196b05199ffc2dfe10eee4f1c1926be15708c0f1437c107d.scope - libcontainer container 9370aabed52cc814196b05199ffc2dfe10eee4f1c1926be15708c0f1437c107d. 
Mar 3 13:53:20.814705 containerd[1567]: time="2026-03-03T13:53:20.814629202Z" level=info msg="StartContainer for \"9370aabed52cc814196b05199ffc2dfe10eee4f1c1926be15708c0f1437c107d\" returns successfully" Mar 3 13:53:21.358552 kubelet[2857]: I0303 13:53:21.358432 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-nshbb" podStartSLOduration=8.354475136 podStartE2EDuration="20.358409978s" podCreationTimestamp="2026-03-03 13:53:01 +0000 UTC" firstStartedPulling="2026-03-03 13:53:02.638519814 +0000 UTC m=+6.051128057" lastFinishedPulling="2026-03-03 13:53:14.642454656 +0000 UTC m=+18.055062899" observedRunningTime="2026-03-03 13:53:15.259047901 +0000 UTC m=+18.671656154" watchObservedRunningTime="2026-03-03 13:53:21.358409978 +0000 UTC m=+24.771018220" Mar 3 13:53:22.741371 sudo[1783]: pam_unix(sudo:session): session closed for user root Mar 3 13:53:22.754023 sshd[1782]: Connection closed by 10.0.0.1 port 59104 Mar 3 13:53:22.763145 sshd-session[1779]: pam_unix(sshd:session): session closed for user core Mar 3 13:53:22.785393 systemd[1]: sshd@8-10.0.0.111:22-10.0.0.1:59104.service: Deactivated successfully. Mar 3 13:53:22.810164 systemd[1]: session-9.scope: Deactivated successfully. Mar 3 13:53:22.810735 systemd[1]: session-9.scope: Consumed 24.442s CPU time, 236.6M memory peak. Mar 3 13:53:22.820296 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit. Mar 3 13:53:22.830500 systemd-logind[1542]: Removed session 9. 
Mar 3 13:53:30.446090 kubelet[2857]: E0303 13:53:30.445527 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.154s" Mar 3 13:53:49.367357 kubelet[2857]: E0303 13:53:49.365436 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="17.86s" Mar 3 13:53:49.396640 systemd[1]: cri-containerd-48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0.scope: Deactivated successfully. Mar 3 13:53:49.399835 systemd[1]: cri-containerd-48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0.scope: Consumed 9.589s CPU time, 49.3M memory peak. Mar 3 13:53:49.485888 containerd[1567]: time="2026-03-03T13:53:49.485448667Z" level=info msg="received container exit event container_id:\"48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0\" id:\"48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0\" pid:2674 exit_status:1 exited_at:{seconds:1772546029 nanos:480557816}" Mar 3 13:53:49.702310 systemd[1]: cri-containerd-d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221.scope: Deactivated successfully. Mar 3 13:53:49.719250 systemd[1]: cri-containerd-d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221.scope: Consumed 5.356s CPU time, 19.1M memory peak. Mar 3 13:53:49.773322 containerd[1567]: time="2026-03-03T13:53:49.773270718Z" level=info msg="received container exit event container_id:\"d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221\" id:\"d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221\" pid:2715 exit_status:1 exited_at:{seconds:1772546029 nanos:771268245}" Mar 3 13:53:50.951831 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0-rootfs.mount: Deactivated successfully. 
Mar 3 13:53:51.069160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221-rootfs.mount: Deactivated successfully. Mar 3 13:53:51.609276 kubelet[2857]: I0303 13:53:51.590717 2857 scope.go:117] "RemoveContainer" containerID="d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221" Mar 3 13:53:51.609276 kubelet[2857]: E0303 13:53:51.616384 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:51.754322 kubelet[2857]: I0303 13:53:51.753048 2857 scope.go:117] "RemoveContainer" containerID="48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0" Mar 3 13:53:51.755780 containerd[1567]: time="2026-03-03T13:53:51.755534535Z" level=info msg="CreateContainer within sandbox \"ff125fdbe2484f67ce2f8247a2d23b2d4c3b6fd160228426d587f21784fce0b3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Mar 3 13:53:51.776282 kubelet[2857]: E0303 13:53:51.774990 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:51.786254 containerd[1567]: time="2026-03-03T13:53:51.786158945Z" level=info msg="CreateContainer within sandbox \"e360d7d0aa230cc2725657ecf10f35e80b385d318c39aeb49d811345dc0957df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Mar 3 13:53:51.890417 containerd[1567]: time="2026-03-03T13:53:51.888190193Z" level=info msg="Container cff787bc441989c9d968725edcb873da5f09c00b65ac21f390118201887aaba0: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:53:51.900475 containerd[1567]: time="2026-03-03T13:53:51.900433329Z" level=info msg="Container f72100062297ea7ad58d05d5672d0d17662537af57737f0dea57689387b52f7b: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:53:51.903855 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount119133978.mount: Deactivated successfully. Mar 3 13:53:51.924462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2423287125.mount: Deactivated successfully. Mar 3 13:53:51.973364 containerd[1567]: time="2026-03-03T13:53:51.973152753Z" level=info msg="CreateContainer within sandbox \"ff125fdbe2484f67ce2f8247a2d23b2d4c3b6fd160228426d587f21784fce0b3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f72100062297ea7ad58d05d5672d0d17662537af57737f0dea57689387b52f7b\"" Mar 3 13:53:52.027300 containerd[1567]: time="2026-03-03T13:53:52.026865204Z" level=info msg="StartContainer for \"f72100062297ea7ad58d05d5672d0d17662537af57737f0dea57689387b52f7b\"" Mar 3 13:53:52.048843 containerd[1567]: time="2026-03-03T13:53:52.048455129Z" level=info msg="CreateContainer within sandbox \"e360d7d0aa230cc2725657ecf10f35e80b385d318c39aeb49d811345dc0957df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"cff787bc441989c9d968725edcb873da5f09c00b65ac21f390118201887aaba0\"" Mar 3 13:53:52.061407 containerd[1567]: time="2026-03-03T13:53:52.061097888Z" level=info msg="connecting to shim f72100062297ea7ad58d05d5672d0d17662537af57737f0dea57689387b52f7b" address="unix:///run/containerd/s/9094a3a3edbfb9fee96b620a1b668bec5c06ee43211ea8a24c07dca4f54e321a" protocol=ttrpc version=3 Mar 3 13:53:52.339727 containerd[1567]: time="2026-03-03T13:53:52.337022043Z" level=info msg="StartContainer for \"cff787bc441989c9d968725edcb873da5f09c00b65ac21f390118201887aaba0\"" Mar 3 13:53:52.374710 containerd[1567]: time="2026-03-03T13:53:52.374435451Z" level=info msg="connecting to shim cff787bc441989c9d968725edcb873da5f09c00b65ac21f390118201887aaba0" address="unix:///run/containerd/s/a59df9533d579515d51c60bf40dbd828f590223e7435a0aa6d0226e471874329" protocol=ttrpc version=3 Mar 3 13:53:52.994192 systemd[1]: Started 
cri-containerd-f72100062297ea7ad58d05d5672d0d17662537af57737f0dea57689387b52f7b.scope - libcontainer container f72100062297ea7ad58d05d5672d0d17662537af57737f0dea57689387b52f7b. Mar 3 13:53:53.320123 systemd[1]: Started cri-containerd-cff787bc441989c9d968725edcb873da5f09c00b65ac21f390118201887aaba0.scope - libcontainer container cff787bc441989c9d968725edcb873da5f09c00b65ac21f390118201887aaba0. Mar 3 13:53:53.431961 containerd[1567]: time="2026-03-03T13:53:53.431648032Z" level=info msg="StartContainer for \"f72100062297ea7ad58d05d5672d0d17662537af57737f0dea57689387b52f7b\" returns successfully" Mar 3 13:53:53.549432 containerd[1567]: time="2026-03-03T13:53:53.549306423Z" level=info msg="StartContainer for \"cff787bc441989c9d968725edcb873da5f09c00b65ac21f390118201887aaba0\" returns successfully" Mar 3 13:53:54.172188 kubelet[2857]: E0303 13:53:54.172103 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:54.179712 kubelet[2857]: E0303 13:53:54.179618 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:55.345365 kubelet[2857]: E0303 13:53:55.345102 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:55.375628 kubelet[2857]: E0303 13:53:55.347400 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:53:56.301221 kubelet[2857]: E0303 13:53:56.300284 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 
13:54:04.402515 kubelet[2857]: E0303 13:54:04.402276 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:54:04.594414 kubelet[2857]: E0303 13:54:04.594235 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:54:05.063863 kubelet[2857]: E0303 13:54:05.063423 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:54:09.585821 kubelet[2857]: E0303 13:54:09.585297 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:54:11.276127 kubelet[2857]: E0303 13:54:11.275233 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:54:15.660442 kubelet[2857]: E0303 13:54:15.660295 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:15.751409 kubelet[2857]: I0303 13:54:15.750764 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92vzp\" (UniqueName: \"kubernetes.io/projected/deebe8a3-257b-493a-a62d-87fcddfaf3ce-kube-api-access-92vzp\") pod \"csi-node-driver-bg6h7\" (UID: \"deebe8a3-257b-493a-a62d-87fcddfaf3ce\") " pod="calico-system/csi-node-driver-bg6h7" Mar 3 13:54:15.751409 kubelet[2857]: 
I0303 13:54:15.750882 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/deebe8a3-257b-493a-a62d-87fcddfaf3ce-kubelet-dir\") pod \"csi-node-driver-bg6h7\" (UID: \"deebe8a3-257b-493a-a62d-87fcddfaf3ce\") " pod="calico-system/csi-node-driver-bg6h7" Mar 3 13:54:15.751409 kubelet[2857]: I0303 13:54:15.750971 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/deebe8a3-257b-493a-a62d-87fcddfaf3ce-registration-dir\") pod \"csi-node-driver-bg6h7\" (UID: \"deebe8a3-257b-493a-a62d-87fcddfaf3ce\") " pod="calico-system/csi-node-driver-bg6h7" Mar 3 13:54:15.751409 kubelet[2857]: I0303 13:54:15.750995 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/deebe8a3-257b-493a-a62d-87fcddfaf3ce-socket-dir\") pod \"csi-node-driver-bg6h7\" (UID: \"deebe8a3-257b-493a-a62d-87fcddfaf3ce\") " pod="calico-system/csi-node-driver-bg6h7" Mar 3 13:54:15.751409 kubelet[2857]: I0303 13:54:15.751020 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/deebe8a3-257b-493a-a62d-87fcddfaf3ce-varrun\") pod \"csi-node-driver-bg6h7\" (UID: \"deebe8a3-257b-493a-a62d-87fcddfaf3ce\") " pod="calico-system/csi-node-driver-bg6h7" Mar 3 13:54:15.793819 systemd[1]: Created slice kubepods-besteffort-pod0aa0f926_854c_4bdb_b827_2250299b1f3e.slice - libcontainer container kubepods-besteffort-pod0aa0f926_854c_4bdb_b827_2250299b1f3e.slice. 
Mar 3 13:54:15.851974 kubelet[2857]: I0303 13:54:15.851636 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwk58\" (UniqueName: \"kubernetes.io/projected/0aa0f926-854c-4bdb-b827-2250299b1f3e-kube-api-access-zwk58\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.851974 kubelet[2857]: I0303 13:54:15.851742 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-lib-modules\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.851974 kubelet[2857]: I0303 13:54:15.851792 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-policysync\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.851974 kubelet[2857]: I0303 13:54:15.851820 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0aa0f926-854c-4bdb-b827-2250299b1f3e-tigera-ca-bundle\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.851974 kubelet[2857]: I0303 13:54:15.851841 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-xtables-lock\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.852715 kubelet[2857]: I0303 13:54:15.851867 2857 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-cni-log-dir\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.854096 kubelet[2857]: I0303 13:54:15.853978 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-cni-net-dir\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.854096 kubelet[2857]: I0303 13:54:15.854033 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-sys-fs\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.854096 kubelet[2857]: I0303 13:54:15.854059 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0aa0f926-854c-4bdb-b827-2250299b1f3e-node-certs\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.854228 kubelet[2857]: I0303 13:54:15.854151 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-bpffs\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.854228 kubelet[2857]: I0303 13:54:15.854190 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-cni-bin-dir\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.854228 kubelet[2857]: I0303 13:54:15.854217 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-flexvol-driver-host\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.854364 kubelet[2857]: I0303 13:54:15.854245 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-var-run-calico\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.854364 kubelet[2857]: I0303 13:54:15.854294 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-nodeproc\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.854364 kubelet[2857]: I0303 13:54:15.854316 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0aa0f926-854c-4bdb-b827-2250299b1f3e-var-lib-calico\") pod \"calico-node-cswvm\" (UID: \"0aa0f926-854c-4bdb-b827-2250299b1f3e\") " pod="calico-system/calico-node-cswvm" Mar 3 13:54:15.933405 systemd[1]: Created slice kubepods-besteffort-pod8541335d_c9c2_49a2_a9c6_37415353ffad.slice - libcontainer container kubepods-besteffort-pod8541335d_c9c2_49a2_a9c6_37415353ffad.slice. 
Mar 3 13:54:15.956308 kubelet[2857]: I0303 13:54:15.956163 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8541335d-c9c2-49a2-a9c6-37415353ffad-tigera-ca-bundle\") pod \"calico-typha-5864dcdc95-l6mc7\" (UID: \"8541335d-c9c2-49a2-a9c6-37415353ffad\") " pod="calico-system/calico-typha-5864dcdc95-l6mc7" Mar 3 13:54:15.956308 kubelet[2857]: I0303 13:54:15.956241 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8541335d-c9c2-49a2-a9c6-37415353ffad-typha-certs\") pod \"calico-typha-5864dcdc95-l6mc7\" (UID: \"8541335d-c9c2-49a2-a9c6-37415353ffad\") " pod="calico-system/calico-typha-5864dcdc95-l6mc7" Mar 3 13:54:15.956476 kubelet[2857]: I0303 13:54:15.956325 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwgdg\" (UniqueName: \"kubernetes.io/projected/8541335d-c9c2-49a2-a9c6-37415353ffad-kube-api-access-rwgdg\") pod \"calico-typha-5864dcdc95-l6mc7\" (UID: \"8541335d-c9c2-49a2-a9c6-37415353ffad\") " pod="calico-system/calico-typha-5864dcdc95-l6mc7" Mar 3 13:54:15.986328 kubelet[2857]: E0303 13:54:15.986191 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:15.986328 kubelet[2857]: W0303 13:54:15.986303 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:15.987089 kubelet[2857]: E0303 13:54:15.986446 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:15.997958 kubelet[2857]: E0303 13:54:15.997811 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:15.997958 kubelet[2857]: W0303 13:54:15.997845 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:15.997958 kubelet[2857]: E0303 13:54:15.997871 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:54:16.001997 kubelet[2857]: E0303 13:54:16.001250 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.001997 kubelet[2857]: W0303 13:54:16.001303 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.001997 kubelet[2857]: E0303 13:54:16.001332 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:16.057970 kubelet[2857]: E0303 13:54:16.057612 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.057970 kubelet[2857]: W0303 13:54:16.057666 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.057970 kubelet[2857]: E0303 13:54:16.057697 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:54:16.059686 kubelet[2857]: E0303 13:54:16.059510 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.059686 kubelet[2857]: W0303 13:54:16.059542 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.059686 kubelet[2857]: E0303 13:54:16.059584 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:16.062685 kubelet[2857]: E0303 13:54:16.062255 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.062685 kubelet[2857]: W0303 13:54:16.062313 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.062685 kubelet[2857]: E0303 13:54:16.062335 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:54:16.062870 kubelet[2857]: E0303 13:54:16.062843 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.062870 kubelet[2857]: W0303 13:54:16.062858 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.063037 kubelet[2857]: E0303 13:54:16.062873 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:16.064677 kubelet[2857]: E0303 13:54:16.064124 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.064677 kubelet[2857]: W0303 13:54:16.064156 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.064677 kubelet[2857]: E0303 13:54:16.064171 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:54:16.065493 kubelet[2857]: E0303 13:54:16.065251 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.065493 kubelet[2857]: W0303 13:54:16.065285 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.065493 kubelet[2857]: E0303 13:54:16.065302 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:16.067166 kubelet[2857]: E0303 13:54:16.067106 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.067166 kubelet[2857]: W0303 13:54:16.067150 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.067166 kubelet[2857]: E0303 13:54:16.067166 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:54:16.067967 kubelet[2857]: E0303 13:54:16.067851 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.068020 kubelet[2857]: W0303 13:54:16.067889 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.068020 kubelet[2857]: E0303 13:54:16.067992 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:16.068876 kubelet[2857]: E0303 13:54:16.068745 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.068876 kubelet[2857]: W0303 13:54:16.068862 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.069087 kubelet[2857]: E0303 13:54:16.068884 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:54:16.069537 kubelet[2857]: E0303 13:54:16.069407 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.069537 kubelet[2857]: W0303 13:54:16.069456 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.069537 kubelet[2857]: E0303 13:54:16.069473 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:16.071038 kubelet[2857]: E0303 13:54:16.071000 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.071038 kubelet[2857]: W0303 13:54:16.071037 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.071153 kubelet[2857]: E0303 13:54:16.071053 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:54:16.077597 kubelet[2857]: E0303 13:54:16.077116 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.077597 kubelet[2857]: W0303 13:54:16.077425 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.078278 kubelet[2857]: E0303 13:54:16.078103 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:16.080197 kubelet[2857]: E0303 13:54:16.079399 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.080197 kubelet[2857]: W0303 13:54:16.079416 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.080197 kubelet[2857]: E0303 13:54:16.079430 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:54:16.082318 kubelet[2857]: E0303 13:54:16.081123 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.082318 kubelet[2857]: W0303 13:54:16.081140 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.082318 kubelet[2857]: E0303 13:54:16.081155 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:16.083443 kubelet[2857]: E0303 13:54:16.083059 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.083443 kubelet[2857]: W0303 13:54:16.083096 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.083443 kubelet[2857]: E0303 13:54:16.083111 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:54:16.085452 kubelet[2857]: E0303 13:54:16.084423 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.085452 kubelet[2857]: W0303 13:54:16.084438 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.085452 kubelet[2857]: E0303 13:54:16.084489 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:16.086389 kubelet[2857]: E0303 13:54:16.086338 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.086389 kubelet[2857]: W0303 13:54:16.086354 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.086389 kubelet[2857]: E0303 13:54:16.086368 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 3 13:54:16.114271 kubelet[2857]: E0303 13:54:16.113060 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 3 13:54:16.114271 kubelet[2857]: W0303 13:54:16.113086 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 3 13:54:16.114271 kubelet[2857]: E0303 13:54:16.113116 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 3 13:54:16.127591 containerd[1567]: time="2026-03-03T13:54:16.127408320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cswvm,Uid:0aa0f926-854c-4bdb-b827-2250299b1f3e,Namespace:calico-system,Attempt:0,}" Mar 3 13:54:16.237687 containerd[1567]: time="2026-03-03T13:54:16.237431873Z" level=info msg="connecting to shim 7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46" address="unix:///run/containerd/s/4b6219366cc1b05f7d070a4e91b3f86a180d340bb6e6aacf4df4da06a5be2071" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:54:16.252849 kubelet[2857]: E0303 13:54:16.251002 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:54:16.255025 containerd[1567]: time="2026-03-03T13:54:16.254856540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5864dcdc95-l6mc7,Uid:8541335d-c9c2-49a2-a9c6-37415353ffad,Namespace:calico-system,Attempt:0,}" Mar 3 13:54:16.321272 containerd[1567]: time="2026-03-03T13:54:16.319664814Z" level=info msg="connecting to shim 517d861c5f3931fe20e69f9dbd2fb614b9ea2ae103952127f2b514a51ff327b9" address="unix:///run/containerd/s/32a518fa650a80a5f0bc6176cddf17f6ba82308a6a431eb3aeb6a0810670599f" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:54:16.320304 systemd[1]: Started cri-containerd-7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46.scope - libcontainer container 7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46. Mar 3 13:54:16.455370 systemd[1]: Started cri-containerd-517d861c5f3931fe20e69f9dbd2fb614b9ea2ae103952127f2b514a51ff327b9.scope - libcontainer container 517d861c5f3931fe20e69f9dbd2fb614b9ea2ae103952127f2b514a51ff327b9. 
Mar 3 13:54:16.489496 containerd[1567]: time="2026-03-03T13:54:16.489300469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cswvm,Uid:0aa0f926-854c-4bdb-b827-2250299b1f3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46\"" Mar 3 13:54:16.499328 containerd[1567]: time="2026-03-03T13:54:16.498654781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 3 13:54:16.630780 containerd[1567]: time="2026-03-03T13:54:16.630407875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5864dcdc95-l6mc7,Uid:8541335d-c9c2-49a2-a9c6-37415353ffad,Namespace:calico-system,Attempt:0,} returns sandbox id \"517d861c5f3931fe20e69f9dbd2fb614b9ea2ae103952127f2b514a51ff327b9\"" Mar 3 13:54:16.632349 kubelet[2857]: E0303 13:54:16.632186 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:54:17.280333 kubelet[2857]: E0303 13:54:17.276820 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:17.729603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633326249.mount: Deactivated successfully. 
Mar 3 13:54:18.046361 containerd[1567]: time="2026-03-03T13:54:18.045839871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:18.052837 containerd[1567]: time="2026-03-03T13:54:18.052675853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 3 13:54:18.056839 containerd[1567]: time="2026-03-03T13:54:18.056534912Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:18.062048 containerd[1567]: time="2026-03-03T13:54:18.061973013Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:18.066713 containerd[1567]: time="2026-03-03T13:54:18.065104547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.566400574s" Mar 3 13:54:18.066713 containerd[1567]: time="2026-03-03T13:54:18.065301234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 3 13:54:18.067951 containerd[1567]: time="2026-03-03T13:54:18.067846167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 3 13:54:18.083459 containerd[1567]: time="2026-03-03T13:54:18.083338641Z" level=info msg="CreateContainer within sandbox 
\"7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 3 13:54:18.123079 containerd[1567]: time="2026-03-03T13:54:18.121106783Z" level=info msg="Container b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:54:18.145203 containerd[1567]: time="2026-03-03T13:54:18.145079580Z" level=info msg="CreateContainer within sandbox \"7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607\"" Mar 3 13:54:18.149672 containerd[1567]: time="2026-03-03T13:54:18.147162883Z" level=info msg="StartContainer for \"b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607\"" Mar 3 13:54:18.149672 containerd[1567]: time="2026-03-03T13:54:18.149461494Z" level=info msg="connecting to shim b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607" address="unix:///run/containerd/s/4b6219366cc1b05f7d070a4e91b3f86a180d340bb6e6aacf4df4da06a5be2071" protocol=ttrpc version=3 Mar 3 13:54:18.220366 systemd[1]: Started cri-containerd-b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607.scope - libcontainer container b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607. Mar 3 13:54:18.463874 containerd[1567]: time="2026-03-03T13:54:18.463655310Z" level=info msg="StartContainer for \"b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607\" returns successfully" Mar 3 13:54:18.493046 systemd[1]: cri-containerd-b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607.scope: Deactivated successfully. 
Mar 3 13:54:18.503438 containerd[1567]: time="2026-03-03T13:54:18.503344370Z" level=info msg="received container exit event container_id:\"b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607\" id:\"b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607\" pid:3605 exited_at:{seconds:1772546058 nanos:502776797}" Mar 3 13:54:18.600176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607-rootfs.mount: Deactivated successfully. Mar 3 13:54:19.276967 kubelet[2857]: E0303 13:54:19.275310 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:21.275373 kubelet[2857]: E0303 13:54:21.274444 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:21.785421 containerd[1567]: time="2026-03-03T13:54:21.785315158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:21.786747 containerd[1567]: time="2026-03-03T13:54:21.786659967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 3 13:54:21.788941 containerd[1567]: time="2026-03-03T13:54:21.788805603Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:21.795107 containerd[1567]: 
time="2026-03-03T13:54:21.795008205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:21.796385 containerd[1567]: time="2026-03-03T13:54:21.796259933Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.728270841s" Mar 3 13:54:21.796385 containerd[1567]: time="2026-03-03T13:54:21.796343099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 3 13:54:21.798076 containerd[1567]: time="2026-03-03T13:54:21.798037179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 3 13:54:21.827864 containerd[1567]: time="2026-03-03T13:54:21.827686460Z" level=info msg="CreateContainer within sandbox \"517d861c5f3931fe20e69f9dbd2fb614b9ea2ae103952127f2b514a51ff327b9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 3 13:54:21.848348 containerd[1567]: time="2026-03-03T13:54:21.848012102Z" level=info msg="Container 76389116d29c6e47fa36b764eab9bfbde096ebf34b0a90e089972b169ac030e4: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:54:21.869842 containerd[1567]: time="2026-03-03T13:54:21.869748798Z" level=info msg="CreateContainer within sandbox \"517d861c5f3931fe20e69f9dbd2fb614b9ea2ae103952127f2b514a51ff327b9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"76389116d29c6e47fa36b764eab9bfbde096ebf34b0a90e089972b169ac030e4\"" Mar 3 13:54:21.870528 containerd[1567]: time="2026-03-03T13:54:21.870414935Z" level=info msg="StartContainer for 
\"76389116d29c6e47fa36b764eab9bfbde096ebf34b0a90e089972b169ac030e4\"" Mar 3 13:54:21.872692 containerd[1567]: time="2026-03-03T13:54:21.872541156Z" level=info msg="connecting to shim 76389116d29c6e47fa36b764eab9bfbde096ebf34b0a90e089972b169ac030e4" address="unix:///run/containerd/s/32a518fa650a80a5f0bc6176cddf17f6ba82308a6a431eb3aeb6a0810670599f" protocol=ttrpc version=3 Mar 3 13:54:21.915793 systemd[1]: Started cri-containerd-76389116d29c6e47fa36b764eab9bfbde096ebf34b0a90e089972b169ac030e4.scope - libcontainer container 76389116d29c6e47fa36b764eab9bfbde096ebf34b0a90e089972b169ac030e4. Mar 3 13:54:22.044754 containerd[1567]: time="2026-03-03T13:54:22.044521764Z" level=info msg="StartContainer for \"76389116d29c6e47fa36b764eab9bfbde096ebf34b0a90e089972b169ac030e4\" returns successfully" Mar 3 13:54:23.436225 kubelet[2857]: E0303 13:54:23.428204 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:54:23.436225 kubelet[2857]: E0303 13:54:23.434871 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:25.678475 kubelet[2857]: E0303 13:54:25.646247 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:26.348249 kubelet[2857]: E0303 13:54:26.345412 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:54:27.954058 kubelet[2857]: E0303 13:54:27.948852 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.196s" Mar 3 13:54:28.071799 kubelet[2857]: E0303 13:54:28.071047 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:28.107728 kubelet[2857]: I0303 13:54:28.105274 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5864dcdc95-l6mc7" podStartSLOduration=7.942415222 podStartE2EDuration="13.104258831s" podCreationTimestamp="2026-03-03 13:54:15 +0000 UTC" firstStartedPulling="2026-03-03 13:54:16.635862985 +0000 UTC m=+80.048471228" lastFinishedPulling="2026-03-03 13:54:21.797706594 +0000 UTC m=+85.210314837" observedRunningTime="2026-03-03 13:54:28.08795206 +0000 UTC m=+91.500560303" watchObservedRunningTime="2026-03-03 13:54:28.104258831 +0000 UTC m=+91.516867084" Mar 3 13:54:28.952470 kubelet[2857]: E0303 13:54:28.952351 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:54:30.298481 kubelet[2857]: E0303 13:54:30.298029 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:32.279193 kubelet[2857]: E0303 13:54:32.278221 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:34.275615 kubelet[2857]: E0303 13:54:34.275504 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:36.276444 kubelet[2857]: E0303 13:54:36.275006 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:38.275058 kubelet[2857]: E0303 13:54:38.274501 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:40.276735 kubelet[2857]: E0303 13:54:40.273186 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:41.147046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807944524.mount: Deactivated successfully. 
Mar 3 13:54:41.231632 containerd[1567]: time="2026-03-03T13:54:41.231392141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:41.237282 containerd[1567]: time="2026-03-03T13:54:41.237055761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 3 13:54:41.239393 containerd[1567]: time="2026-03-03T13:54:41.239308256Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:41.245977 containerd[1567]: time="2026-03-03T13:54:41.243692718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:41.245977 containerd[1567]: time="2026-03-03T13:54:41.245755208Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 19.447481839s" Mar 3 13:54:41.245977 containerd[1567]: time="2026-03-03T13:54:41.245786416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 3 13:54:41.258988 containerd[1567]: time="2026-03-03T13:54:41.258943464Z" level=info msg="CreateContainer within sandbox \"7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 3 13:54:41.422660 containerd[1567]: time="2026-03-03T13:54:41.420714149Z" level=info msg="Container 
2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:54:41.660296 containerd[1567]: time="2026-03-03T13:54:41.660213522Z" level=info msg="CreateContainer within sandbox \"7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919\"" Mar 3 13:54:41.661847 containerd[1567]: time="2026-03-03T13:54:41.661629947Z" level=info msg="StartContainer for \"2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919\"" Mar 3 13:54:41.664885 containerd[1567]: time="2026-03-03T13:54:41.664665525Z" level=info msg="connecting to shim 2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919" address="unix:///run/containerd/s/4b6219366cc1b05f7d070a4e91b3f86a180d340bb6e6aacf4df4da06a5be2071" protocol=ttrpc version=3 Mar 3 13:54:41.745814 systemd[1]: Started cri-containerd-2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919.scope - libcontainer container 2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919. Mar 3 13:54:41.974274 containerd[1567]: time="2026-03-03T13:54:41.974148696Z" level=info msg="StartContainer for \"2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919\" returns successfully" Mar 3 13:54:42.106324 systemd[1]: cri-containerd-2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919.scope: Deactivated successfully. 
Mar 3 13:54:42.139093 containerd[1567]: time="2026-03-03T13:54:42.138313445Z" level=info msg="received container exit event container_id:\"2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919\" id:\"2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919\" pid:3710 exited_at:{seconds:1772546082 nanos:122520832}" Mar 3 13:54:42.275798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919-rootfs.mount: Deactivated successfully. Mar 3 13:54:42.279647 kubelet[2857]: E0303 13:54:42.279307 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:43.193271 containerd[1567]: time="2026-03-03T13:54:43.187303384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 3 13:54:44.279476 kubelet[2857]: E0303 13:54:44.277285 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:46.275892 kubelet[2857]: E0303 13:54:46.273399 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:48.279196 kubelet[2857]: E0303 13:54:48.277143 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:50.275466 kubelet[2857]: E0303 13:54:50.274009 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:51.680860 containerd[1567]: time="2026-03-03T13:54:51.679640410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:51.682875 containerd[1567]: time="2026-03-03T13:54:51.682832540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 3 13:54:51.685756 containerd[1567]: time="2026-03-03T13:54:51.684220719Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:51.689990 containerd[1567]: time="2026-03-03T13:54:51.689697742Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:54:51.692355 containerd[1567]: time="2026-03-03T13:54:51.692276840Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 8.50007127s" Mar 3 13:54:51.692355 containerd[1567]: 
time="2026-03-03T13:54:51.692314470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 3 13:54:51.723991 containerd[1567]: time="2026-03-03T13:54:51.723781985Z" level=info msg="CreateContainer within sandbox \"7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 3 13:54:51.787038 containerd[1567]: time="2026-03-03T13:54:51.786555197Z" level=info msg="Container 930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:54:51.846420 containerd[1567]: time="2026-03-03T13:54:51.846369748Z" level=info msg="CreateContainer within sandbox \"7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a\"" Mar 3 13:54:51.850779 containerd[1567]: time="2026-03-03T13:54:51.848052355Z" level=info msg="StartContainer for \"930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a\"" Mar 3 13:54:51.852166 containerd[1567]: time="2026-03-03T13:54:51.852140356Z" level=info msg="connecting to shim 930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a" address="unix:///run/containerd/s/4b6219366cc1b05f7d070a4e91b3f86a180d340bb6e6aacf4df4da06a5be2071" protocol=ttrpc version=3 Mar 3 13:54:51.985424 systemd[1]: Started cri-containerd-930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a.scope - libcontainer container 930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a. 
Mar 3 13:54:52.282154 kubelet[2857]: E0303 13:54:52.281269 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:52.429697 containerd[1567]: time="2026-03-03T13:54:52.428709291Z" level=info msg="StartContainer for \"930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a\" returns successfully" Mar 3 13:54:54.276252 kubelet[2857]: E0303 13:54:54.275499 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce" Mar 3 13:54:54.934379 systemd[1]: cri-containerd-930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a.scope: Deactivated successfully. Mar 3 13:54:54.936062 systemd[1]: cri-containerd-930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a.scope: Consumed 1.434s CPU time, 185.7M memory peak, 5.7M read from disk, 177M written to disk. Mar 3 13:54:54.988003 containerd[1567]: time="2026-03-03T13:54:54.987112170Z" level=info msg="received container exit event container_id:\"930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a\" id:\"930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a\" pid:3769 exited_at:{seconds:1772546094 nanos:986716412}" Mar 3 13:54:55.099525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a-rootfs.mount: Deactivated successfully. 
Mar 3 13:54:55.151519 kubelet[2857]: I0303 13:54:55.151409 2857 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 3 13:54:55.452848 systemd[1]: Created slice kubepods-besteffort-pod88976114_2a63_4e90_9ec3_c15d733ef749.slice - libcontainer container kubepods-besteffort-pod88976114_2a63_4e90_9ec3_c15d733ef749.slice.
Mar 3 13:54:55.475540 systemd[1]: Created slice kubepods-besteffort-podac3696c3_2c99_4adc_b76d_fc52fa6fb25a.slice - libcontainer container kubepods-besteffort-podac3696c3_2c99_4adc_b76d_fc52fa6fb25a.slice.
Mar 3 13:54:55.514985 kubelet[2857]: I0303 13:54:55.514777 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/88976114-2a63-4e90-9ec3-c15d733ef749-nginx-config\") pod \"whisker-7f5f8cc5b9-qhdtn\" (UID: \"88976114-2a63-4e90-9ec3-c15d733ef749\") " pod="calico-system/whisker-7f5f8cc5b9-qhdtn"
Mar 3 13:54:55.515561 kubelet[2857]: I0303 13:54:55.514992 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th6nb\" (UniqueName: \"kubernetes.io/projected/88976114-2a63-4e90-9ec3-c15d733ef749-kube-api-access-th6nb\") pod \"whisker-7f5f8cc5b9-qhdtn\" (UID: \"88976114-2a63-4e90-9ec3-c15d733ef749\") " pod="calico-system/whisker-7f5f8cc5b9-qhdtn"
Mar 3 13:54:55.515561 kubelet[2857]: I0303 13:54:55.515051 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88976114-2a63-4e90-9ec3-c15d733ef749-whisker-backend-key-pair\") pod \"whisker-7f5f8cc5b9-qhdtn\" (UID: \"88976114-2a63-4e90-9ec3-c15d733ef749\") " pod="calico-system/whisker-7f5f8cc5b9-qhdtn"
Mar 3 13:54:55.515561 kubelet[2857]: I0303 13:54:55.515086 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88976114-2a63-4e90-9ec3-c15d733ef749-whisker-ca-bundle\") pod \"whisker-7f5f8cc5b9-qhdtn\" (UID: \"88976114-2a63-4e90-9ec3-c15d733ef749\") " pod="calico-system/whisker-7f5f8cc5b9-qhdtn"
Mar 3 13:54:55.520686 systemd[1]: Created slice kubepods-besteffort-pod3eab8073_1931_493b_a085_eabcfb11ddb8.slice - libcontainer container kubepods-besteffort-pod3eab8073_1931_493b_a085_eabcfb11ddb8.slice.
Mar 3 13:54:55.547267 systemd[1]: Created slice kubepods-besteffort-pod85572497_fd61_4805_83e2_fd71e6c3af99.slice - libcontainer container kubepods-besteffort-pod85572497_fd61_4805_83e2_fd71e6c3af99.slice.
Mar 3 13:54:55.578061 systemd[1]: Created slice kubepods-besteffort-podaaa974c7_96c4_4052_b9a9_1875c2f7ed66.slice - libcontainer container kubepods-besteffort-podaaa974c7_96c4_4052_b9a9_1875c2f7ed66.slice.
Mar 3 13:54:55.618804 kubelet[2857]: I0303 13:54:55.618489 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aaa974c7-96c4-4052-b9a9-1875c2f7ed66-calico-apiserver-certs\") pod \"calico-apiserver-68c9b68fff-5cpns\" (UID: \"aaa974c7-96c4-4052-b9a9-1875c2f7ed66\") " pod="calico-system/calico-apiserver-68c9b68fff-5cpns"
Mar 3 13:54:55.619884 kubelet[2857]: I0303 13:54:55.619110 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ac3696c3-2c99-4adc-b76d-fc52fa6fb25a-calico-apiserver-certs\") pod \"calico-apiserver-68c9b68fff-xwbpz\" (UID: \"ac3696c3-2c99-4adc-b76d-fc52fa6fb25a\") " pod="calico-system/calico-apiserver-68c9b68fff-xwbpz"
Mar 3 13:54:55.619884 kubelet[2857]: I0303 13:54:55.619141 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c55rb\" (UniqueName: \"kubernetes.io/projected/85572497-fd61-4805-83e2-fd71e6c3af99-kube-api-access-c55rb\") pod \"goldmane-cccfbd5cf-dqcvl\" (UID: \"85572497-fd61-4805-83e2-fd71e6c3af99\") " pod="calico-system/goldmane-cccfbd5cf-dqcvl"
Mar 3 13:54:55.623088 kubelet[2857]: I0303 13:54:55.621738 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs7gj\" (UniqueName: \"kubernetes.io/projected/aaa974c7-96c4-4052-b9a9-1875c2f7ed66-kube-api-access-vs7gj\") pod \"calico-apiserver-68c9b68fff-5cpns\" (UID: \"aaa974c7-96c4-4052-b9a9-1875c2f7ed66\") " pod="calico-system/calico-apiserver-68c9b68fff-5cpns"
Mar 3 13:54:55.623088 kubelet[2857]: I0303 13:54:55.621781 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85572497-fd61-4805-83e2-fd71e6c3af99-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-dqcvl\" (UID: \"85572497-fd61-4805-83e2-fd71e6c3af99\") " pod="calico-system/goldmane-cccfbd5cf-dqcvl"
Mar 3 13:54:55.623088 kubelet[2857]: I0303 13:54:55.621857 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf9xh\" (UniqueName: \"kubernetes.io/projected/ac3696c3-2c99-4adc-b76d-fc52fa6fb25a-kube-api-access-vf9xh\") pod \"calico-apiserver-68c9b68fff-xwbpz\" (UID: \"ac3696c3-2c99-4adc-b76d-fc52fa6fb25a\") " pod="calico-system/calico-apiserver-68c9b68fff-xwbpz"
Mar 3 13:54:55.634782 kubelet[2857]: I0303 13:54:55.633828 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3eab8073-1931-493b-a085-eabcfb11ddb8-tigera-ca-bundle\") pod \"calico-kube-controllers-54b9f97489-lmdc7\" (UID: \"3eab8073-1931-493b-a085-eabcfb11ddb8\") " pod="calico-system/calico-kube-controllers-54b9f97489-lmdc7"
Mar 3 13:54:55.638159 kubelet[2857]: I0303 13:54:55.638127 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/85572497-fd61-4805-83e2-fd71e6c3af99-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-dqcvl\" (UID: \"85572497-fd61-4805-83e2-fd71e6c3af99\") " pod="calico-system/goldmane-cccfbd5cf-dqcvl"
Mar 3 13:54:55.638851 kubelet[2857]: I0303 13:54:55.638739 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m4t9\" (UniqueName: \"kubernetes.io/projected/3eab8073-1931-493b-a085-eabcfb11ddb8-kube-api-access-4m4t9\") pod \"calico-kube-controllers-54b9f97489-lmdc7\" (UID: \"3eab8073-1931-493b-a085-eabcfb11ddb8\") " pod="calico-system/calico-kube-controllers-54b9f97489-lmdc7"
Mar 3 13:54:55.639012 kubelet[2857]: I0303 13:54:55.638848 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/85572497-fd61-4805-83e2-fd71e6c3af99-config\") pod \"goldmane-cccfbd5cf-dqcvl\" (UID: \"85572497-fd61-4805-83e2-fd71e6c3af99\") " pod="calico-system/goldmane-cccfbd5cf-dqcvl"
Mar 3 13:54:55.655848 containerd[1567]: time="2026-03-03T13:54:55.655784473Z" level=info msg="CreateContainer within sandbox \"7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 3 13:54:55.728141 systemd[1]: Created slice kubepods-burstable-pod0d3a646c_782f_4cf1_be06_96da412da3c6.slice - libcontainer container kubepods-burstable-pod0d3a646c_782f_4cf1_be06_96da412da3c6.slice.
Mar 3 13:54:55.733567 systemd[1]: Created slice kubepods-burstable-podb9eea95f_f6e8_4b48_a1e5_9dbed8decbb6.slice - libcontainer container kubepods-burstable-podb9eea95f_f6e8_4b48_a1e5_9dbed8decbb6.slice.
Mar 3 13:54:55.740792 kubelet[2857]: I0303 13:54:55.740301 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d3a646c-782f-4cf1-be06-96da412da3c6-config-volume\") pod \"coredns-66bc5c9577-hg5q2\" (UID: \"0d3a646c-782f-4cf1-be06-96da412da3c6\") " pod="kube-system/coredns-66bc5c9577-hg5q2"
Mar 3 13:54:55.746734 kubelet[2857]: I0303 13:54:55.744409 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mtf4\" (UniqueName: \"kubernetes.io/projected/0d3a646c-782f-4cf1-be06-96da412da3c6-kube-api-access-4mtf4\") pod \"coredns-66bc5c9577-hg5q2\" (UID: \"0d3a646c-782f-4cf1-be06-96da412da3c6\") " pod="kube-system/coredns-66bc5c9577-hg5q2"
Mar 3 13:54:55.746734 kubelet[2857]: I0303 13:54:55.745227 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6-config-volume\") pod \"coredns-66bc5c9577-drhn2\" (UID: \"b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6\") " pod="kube-system/coredns-66bc5c9577-drhn2"
Mar 3 13:54:55.746734 kubelet[2857]: I0303 13:54:55.745412 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4fdr\" (UniqueName: \"kubernetes.io/projected/b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6-kube-api-access-t4fdr\") pod \"coredns-66bc5c9577-drhn2\" (UID: \"b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6\") " pod="kube-system/coredns-66bc5c9577-drhn2"
Mar 3 13:54:55.755692 containerd[1567]: time="2026-03-03T13:54:55.750683418Z" level=info msg="Container 860b661fc08dbead18667d89dc691951363386cb183bdb11f95c6d58b8da4339: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:54:55.788865 containerd[1567]: time="2026-03-03T13:54:55.788733123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5f8cc5b9-qhdtn,Uid:88976114-2a63-4e90-9ec3-c15d733ef749,Namespace:calico-system,Attempt:0,}"
Mar 3 13:54:55.797800 containerd[1567]: time="2026-03-03T13:54:55.796406297Z" level=info msg="CreateContainer within sandbox \"7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"860b661fc08dbead18667d89dc691951363386cb183bdb11f95c6d58b8da4339\""
Mar 3 13:54:55.801978 containerd[1567]: time="2026-03-03T13:54:55.801861207Z" level=info msg="StartContainer for \"860b661fc08dbead18667d89dc691951363386cb183bdb11f95c6d58b8da4339\""
Mar 3 13:54:55.824549 containerd[1567]: time="2026-03-03T13:54:55.824503679Z" level=info msg="connecting to shim 860b661fc08dbead18667d89dc691951363386cb183bdb11f95c6d58b8da4339" address="unix:///run/containerd/s/4b6219366cc1b05f7d070a4e91b3f86a180d340bb6e6aacf4df4da06a5be2071" protocol=ttrpc version=3
Mar 3 13:54:55.888873 containerd[1567]: time="2026-03-03T13:54:55.888664590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-dqcvl,Uid:85572497-fd61-4805-83e2-fd71e6c3af99,Namespace:calico-system,Attempt:0,}"
Mar 3 13:54:55.890185 containerd[1567]: time="2026-03-03T13:54:55.889535834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b9f97489-lmdc7,Uid:3eab8073-1931-493b-a085-eabcfb11ddb8,Namespace:calico-system,Attempt:0,}"
Mar 3 13:54:55.930659 systemd[1]: Started cri-containerd-860b661fc08dbead18667d89dc691951363386cb183bdb11f95c6d58b8da4339.scope - libcontainer container 860b661fc08dbead18667d89dc691951363386cb183bdb11f95c6d58b8da4339.
Mar 3 13:54:55.933291 containerd[1567]: time="2026-03-03T13:54:55.932065457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c9b68fff-5cpns,Uid:aaa974c7-96c4-4052-b9a9-1875c2f7ed66,Namespace:calico-system,Attempt:0,}"
Mar 3 13:54:56.083010 kubelet[2857]: E0303 13:54:56.082860 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:54:56.087276 containerd[1567]: time="2026-03-03T13:54:56.087223230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-drhn2,Uid:b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6,Namespace:kube-system,Attempt:0,}"
Mar 3 13:54:56.116544 kubelet[2857]: E0303 13:54:56.114846 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:54:56.120047 containerd[1567]: time="2026-03-03T13:54:56.119962091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hg5q2,Uid:0d3a646c-782f-4cf1-be06-96da412da3c6,Namespace:kube-system,Attempt:0,}"
Mar 3 13:54:56.135478 containerd[1567]: time="2026-03-03T13:54:56.135432287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c9b68fff-xwbpz,Uid:ac3696c3-2c99-4adc-b76d-fc52fa6fb25a,Namespace:calico-system,Attempt:0,}"
Mar 3 13:54:56.292055 systemd[1]: Created slice kubepods-besteffort-poddeebe8a3_257b_493a_a62d_87fcddfaf3ce.slice - libcontainer container kubepods-besteffort-poddeebe8a3_257b_493a_a62d_87fcddfaf3ce.slice.
Mar 3 13:54:56.334870 containerd[1567]: time="2026-03-03T13:54:56.331500790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg6h7,Uid:deebe8a3-257b-493a-a62d-87fcddfaf3ce,Namespace:calico-system,Attempt:0,}"
Mar 3 13:54:56.472057 containerd[1567]: time="2026-03-03T13:54:56.471787791Z" level=error msg="Failed to destroy network for sandbox \"095842cc75ecade41cefaef1b4a83834241fbdace5b6bf1beca33959e75773d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.480710 systemd[1]: run-netns-cni\x2da4a14bea\x2d18fa\x2d41f8\x2dab0d\x2d3a321a0e151b.mount: Deactivated successfully.
Mar 3 13:54:56.496074 containerd[1567]: time="2026-03-03T13:54:56.495379183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f5f8cc5b9-qhdtn,Uid:88976114-2a63-4e90-9ec3-c15d733ef749,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"095842cc75ecade41cefaef1b4a83834241fbdace5b6bf1beca33959e75773d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.506060 containerd[1567]: time="2026-03-03T13:54:56.505400444Z" level=info msg="StartContainer for \"860b661fc08dbead18667d89dc691951363386cb183bdb11f95c6d58b8da4339\" returns successfully"
Mar 3 13:54:56.506060 containerd[1567]: time="2026-03-03T13:54:56.505512113Z" level=error msg="Failed to destroy network for sandbox \"a376541368e799255678abc57ab10ad24ed071c3b712528e526af2f3ce7af34d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.540819 kubelet[2857]: E0303 13:54:56.536656 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095842cc75ecade41cefaef1b4a83834241fbdace5b6bf1beca33959e75773d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.540819 kubelet[2857]: E0303 13:54:56.536856 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095842cc75ecade41cefaef1b4a83834241fbdace5b6bf1beca33959e75773d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f5f8cc5b9-qhdtn"
Mar 3 13:54:56.540819 kubelet[2857]: E0303 13:54:56.537867 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"095842cc75ecade41cefaef1b4a83834241fbdace5b6bf1beca33959e75773d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f5f8cc5b9-qhdtn"
Mar 3 13:54:56.542803 kubelet[2857]: E0303 13:54:56.538644 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7f5f8cc5b9-qhdtn_calico-system(88976114-2a63-4e90-9ec3-c15d733ef749)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7f5f8cc5b9-qhdtn_calico-system(88976114-2a63-4e90-9ec3-c15d733ef749)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"095842cc75ecade41cefaef1b4a83834241fbdace5b6bf1beca33959e75773d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7f5f8cc5b9-qhdtn" podUID="88976114-2a63-4e90-9ec3-c15d733ef749"
Mar 3 13:54:56.542803 kubelet[2857]: E0303 13:54:56.542190 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a376541368e799255678abc57ab10ad24ed071c3b712528e526af2f3ce7af34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.542803 kubelet[2857]: E0303 13:54:56.542272 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a376541368e799255678abc57ab10ad24ed071c3b712528e526af2f3ce7af34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-dqcvl"
Mar 3 13:54:56.543378 containerd[1567]: time="2026-03-03T13:54:56.541480400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-dqcvl,Uid:85572497-fd61-4805-83e2-fd71e6c3af99,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a376541368e799255678abc57ab10ad24ed071c3b712528e526af2f3ce7af34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.543519 kubelet[2857]: E0303 13:54:56.542297 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a376541368e799255678abc57ab10ad24ed071c3b712528e526af2f3ce7af34d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-dqcvl"
Mar 3 13:54:56.543519 kubelet[2857]: E0303 13:54:56.542714 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-dqcvl_calico-system(85572497-fd61-4805-83e2-fd71e6c3af99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-dqcvl_calico-system(85572497-fd61-4805-83e2-fd71e6c3af99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a376541368e799255678abc57ab10ad24ed071c3b712528e526af2f3ce7af34d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-dqcvl" podUID="85572497-fd61-4805-83e2-fd71e6c3af99"
Mar 3 13:54:56.561710 containerd[1567]: time="2026-03-03T13:54:56.558176036Z" level=error msg="Failed to destroy network for sandbox \"e49c321b1a93cecc74a9f45b515a1eafaa7d0e06b0d90c5c045f8f15cfa17ac0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.594231 containerd[1567]: time="2026-03-03T13:54:56.593118469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b9f97489-lmdc7,Uid:3eab8073-1931-493b-a085-eabcfb11ddb8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e49c321b1a93cecc74a9f45b515a1eafaa7d0e06b0d90c5c045f8f15cfa17ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.599440 kubelet[2857]: E0303 13:54:56.596798 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e49c321b1a93cecc74a9f45b515a1eafaa7d0e06b0d90c5c045f8f15cfa17ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.599440 kubelet[2857]: E0303 13:54:56.597036 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e49c321b1a93cecc74a9f45b515a1eafaa7d0e06b0d90c5c045f8f15cfa17ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b9f97489-lmdc7"
Mar 3 13:54:56.599440 kubelet[2857]: E0303 13:54:56.597060 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e49c321b1a93cecc74a9f45b515a1eafaa7d0e06b0d90c5c045f8f15cfa17ac0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b9f97489-lmdc7"
Mar 3 13:54:56.605326 kubelet[2857]: E0303 13:54:56.597122 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54b9f97489-lmdc7_calico-system(3eab8073-1931-493b-a085-eabcfb11ddb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54b9f97489-lmdc7_calico-system(3eab8073-1931-493b-a085-eabcfb11ddb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e49c321b1a93cecc74a9f45b515a1eafaa7d0e06b0d90c5c045f8f15cfa17ac0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54b9f97489-lmdc7" podUID="3eab8073-1931-493b-a085-eabcfb11ddb8"
Mar 3 13:54:56.674885 containerd[1567]: time="2026-03-03T13:54:56.674628161Z" level=error msg="Failed to destroy network for sandbox \"cbfb4b95d357014fea8bbb05f7ca573ef28b82df6dde596d3821b4efa7e41239\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.685387 containerd[1567]: time="2026-03-03T13:54:56.685141802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-drhn2,Uid:b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbfb4b95d357014fea8bbb05f7ca573ef28b82df6dde596d3821b4efa7e41239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.687571 kubelet[2857]: E0303 13:54:56.687409 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbfb4b95d357014fea8bbb05f7ca573ef28b82df6dde596d3821b4efa7e41239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.687571 kubelet[2857]: E0303 13:54:56.687531 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbfb4b95d357014fea8bbb05f7ca573ef28b82df6dde596d3821b4efa7e41239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-drhn2"
Mar 3 13:54:56.687571 kubelet[2857]: E0303 13:54:56.687561 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbfb4b95d357014fea8bbb05f7ca573ef28b82df6dde596d3821b4efa7e41239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-drhn2"
Mar 3 13:54:56.687793 kubelet[2857]: E0303 13:54:56.687682 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-drhn2_kube-system(b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-drhn2_kube-system(b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbfb4b95d357014fea8bbb05f7ca573ef28b82df6dde596d3821b4efa7e41239\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-drhn2" podUID="b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6"
Mar 3 13:54:56.731779 containerd[1567]: time="2026-03-03T13:54:56.731716944Z" level=error msg="Failed to destroy network for sandbox \"dd693219cf536b70408f194ef0b2193e12a55c447b51bf9e99dd5509527b79a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.736146 containerd[1567]: time="2026-03-03T13:54:56.735875247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hg5q2,Uid:0d3a646c-782f-4cf1-be06-96da412da3c6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd693219cf536b70408f194ef0b2193e12a55c447b51bf9e99dd5509527b79a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.740035 kubelet[2857]: E0303 13:54:56.737678 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd693219cf536b70408f194ef0b2193e12a55c447b51bf9e99dd5509527b79a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.740035 kubelet[2857]: E0303 13:54:56.737768 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd693219cf536b70408f194ef0b2193e12a55c447b51bf9e99dd5509527b79a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hg5q2"
Mar 3 13:54:56.740035 kubelet[2857]: E0303 13:54:56.737797 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd693219cf536b70408f194ef0b2193e12a55c447b51bf9e99dd5509527b79a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-hg5q2"
Mar 3 13:54:56.748201 kubelet[2857]: E0303 13:54:56.744842 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-hg5q2_kube-system(0d3a646c-782f-4cf1-be06-96da412da3c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-hg5q2_kube-system(0d3a646c-782f-4cf1-be06-96da412da3c6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd693219cf536b70408f194ef0b2193e12a55c447b51bf9e99dd5509527b79a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-hg5q2" podUID="0d3a646c-782f-4cf1-be06-96da412da3c6"
Mar 3 13:54:56.776827 containerd[1567]: time="2026-03-03T13:54:56.776508961Z" level=error msg="Failed to destroy network for sandbox \"7f1e47c4b7c1700dcf4da2822354469a24e2e7098d5b170f88521e1dc3a4a973\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.784290 containerd[1567]: time="2026-03-03T13:54:56.784000225Z" level=error msg="Failed to destroy network for sandbox \"d7a2f66634885d22a2ada6be08da188a70b72449d38af5a696442867b73960ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.791049 containerd[1567]: time="2026-03-03T13:54:56.790855006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c9b68fff-5cpns,Uid:aaa974c7-96c4-4052-b9a9-1875c2f7ed66,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1e47c4b7c1700dcf4da2822354469a24e2e7098d5b170f88521e1dc3a4a973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.794808 kubelet[2857]: E0303 13:54:56.791672 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1e47c4b7c1700dcf4da2822354469a24e2e7098d5b170f88521e1dc3a4a973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.794808 kubelet[2857]: E0303 13:54:56.791794 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1e47c4b7c1700dcf4da2822354469a24e2e7098d5b170f88521e1dc3a4a973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-68c9b68fff-5cpns"
Mar 3 13:54:56.794808 kubelet[2857]: E0303 13:54:56.791822 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1e47c4b7c1700dcf4da2822354469a24e2e7098d5b170f88521e1dc3a4a973\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-68c9b68fff-5cpns"
Mar 3 13:54:56.795385 kubelet[2857]: E0303 13:54:56.792004 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c9b68fff-5cpns_calico-system(aaa974c7-96c4-4052-b9a9-1875c2f7ed66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c9b68fff-5cpns_calico-system(aaa974c7-96c4-4052-b9a9-1875c2f7ed66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f1e47c4b7c1700dcf4da2822354469a24e2e7098d5b170f88521e1dc3a4a973\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-68c9b68fff-5cpns" podUID="aaa974c7-96c4-4052-b9a9-1875c2f7ed66"
Mar 3 13:54:56.799877 containerd[1567]: time="2026-03-03T13:54:56.799814227Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c9b68fff-xwbpz,Uid:ac3696c3-2c99-4adc-b76d-fc52fa6fb25a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a2f66634885d22a2ada6be08da188a70b72449d38af5a696442867b73960ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.800862 kubelet[2857]: E0303 13:54:56.800562 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a2f66634885d22a2ada6be08da188a70b72449d38af5a696442867b73960ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.800862 kubelet[2857]: E0303 13:54:56.800707 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a2f66634885d22a2ada6be08da188a70b72449d38af5a696442867b73960ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-68c9b68fff-xwbpz"
Mar 3 13:54:56.800862 kubelet[2857]: E0303 13:54:56.800733 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a2f66634885d22a2ada6be08da188a70b72449d38af5a696442867b73960ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-68c9b68fff-xwbpz"
Mar 3 13:54:56.801097 kubelet[2857]: E0303 13:54:56.800799 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-68c9b68fff-xwbpz_calico-system(ac3696c3-2c99-4adc-b76d-fc52fa6fb25a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-68c9b68fff-xwbpz_calico-system(ac3696c3-2c99-4adc-b76d-fc52fa6fb25a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7a2f66634885d22a2ada6be08da188a70b72449d38af5a696442867b73960ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-68c9b68fff-xwbpz" podUID="ac3696c3-2c99-4adc-b76d-fc52fa6fb25a"
Mar 3 13:54:56.853456 containerd[1567]: time="2026-03-03T13:54:56.853334106Z" level=error msg="Failed to destroy network for sandbox \"0a285f5ac0f777981977959f7e5efc423d73220112f0a533b2d6c90da0824d05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.864858 containerd[1567]: time="2026-03-03T13:54:56.864688064Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg6h7,Uid:deebe8a3-257b-493a-a62d-87fcddfaf3ce,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a285f5ac0f777981977959f7e5efc423d73220112f0a533b2d6c90da0824d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.868795 kubelet[2857]: E0303 13:54:56.867061 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a285f5ac0f777981977959f7e5efc423d73220112f0a533b2d6c90da0824d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 3 13:54:56.868795 kubelet[2857]: E0303 13:54:56.867166 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a285f5ac0f777981977959f7e5efc423d73220112f0a533b2d6c90da0824d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bg6h7"
Mar 3 13:54:56.868795 kubelet[2857]: E0303 13:54:56.867193 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a285f5ac0f777981977959f7e5efc423d73220112f0a533b2d6c90da0824d05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bg6h7"
Mar 3 13:54:56.869158 kubelet[2857]: E0303 13:54:56.867246 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bg6h7_calico-system(deebe8a3-257b-493a-a62d-87fcddfaf3ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bg6h7_calico-system(deebe8a3-257b-493a-a62d-87fcddfaf3ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a285f5ac0f777981977959f7e5efc423d73220112f0a533b2d6c90da0824d05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bg6h7" podUID="deebe8a3-257b-493a-a62d-87fcddfaf3ce"
Mar 3 13:54:57.102509 systemd[1]: run-netns-cni\x2d2c20345a\x2d86a9\x2defb1\x2d225d\x2d8f67ed0316a1.mount: Deactivated successfully.
Mar 3 13:54:57.102759 systemd[1]: run-netns-cni\x2d00d4073e\x2d5e6f\x2d86ed\x2db060\x2ded47800f6279.mount: Deactivated successfully.
Mar 3 13:54:57.102870 systemd[1]: run-netns-cni\x2d732ff624\x2d81e0\x2d2ba9\x2df3c1\x2de4502bff4a31.mount: Deactivated successfully.
Mar 3 13:54:57.104344 systemd[1]: run-netns-cni\x2d6c2bdfde\x2db67f\x2d63b0\x2d2378\x2d45ecc745495a.mount: Deactivated successfully.
Mar 3 13:54:57.104525 systemd[1]: run-netns-cni\x2d0dba5b30\x2ddc1b\x2de257\x2d88ee\x2d604f51f14585.mount: Deactivated successfully.
Mar 3 13:54:57.104802 systemd[1]: run-netns-cni\x2d26fb014b\x2df2e4\x2d3c72\x2df596\x2daa9d8783995a.mount: Deactivated successfully.
Mar 3 13:54:57.106643 systemd[1]: run-netns-cni\x2d3b1a3b37\x2d517c\x2dcb1a\x2da6a0\x2d91f7b970dc10.mount: Deactivated successfully.
Mar 3 13:54:57.762733 kubelet[2857]: I0303 13:54:57.758685 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cswvm" podStartSLOduration=7.553797877 podStartE2EDuration="42.758659558s" podCreationTimestamp="2026-03-03 13:54:15 +0000 UTC" firstStartedPulling="2026-03-03 13:54:16.492142683 +0000 UTC m=+79.904750925" lastFinishedPulling="2026-03-03 13:54:51.697004363 +0000 UTC m=+115.109612606" observedRunningTime="2026-03-03 13:54:57.740728281 +0000 UTC m=+121.153336524" watchObservedRunningTime="2026-03-03 13:54:57.758659558 +0000 UTC m=+121.171267801" Mar 3 13:54:57.785888 kubelet[2857]: I0303 13:54:57.780309 2857 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88976114-2a63-4e90-9ec3-c15d733ef749-whisker-ca-bundle\") pod \"88976114-2a63-4e90-9ec3-c15d733ef749\" (UID: \"88976114-2a63-4e90-9ec3-c15d733ef749\") " Mar 3 13:54:57.785888 kubelet[2857]: I0303 13:54:57.780400 2857 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/88976114-2a63-4e90-9ec3-c15d733ef749-nginx-config\") pod \"88976114-2a63-4e90-9ec3-c15d733ef749\" (UID: \"88976114-2a63-4e90-9ec3-c15d733ef749\") " Mar 3 13:54:57.785888 kubelet[2857]: I0303 13:54:57.780430 2857 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88976114-2a63-4e90-9ec3-c15d733ef749-whisker-backend-key-pair\") pod \"88976114-2a63-4e90-9ec3-c15d733ef749\" (UID: \"88976114-2a63-4e90-9ec3-c15d733ef749\") " Mar 3 13:54:57.785888 kubelet[2857]: I0303 13:54:57.780459 2857 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th6nb\" (UniqueName: \"kubernetes.io/projected/88976114-2a63-4e90-9ec3-c15d733ef749-kube-api-access-th6nb\") pod \"88976114-2a63-4e90-9ec3-c15d733ef749\" (UID: 
\"88976114-2a63-4e90-9ec3-c15d733ef749\") " Mar 3 13:54:57.785888 kubelet[2857]: I0303 13:54:57.785094 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88976114-2a63-4e90-9ec3-c15d733ef749-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "88976114-2a63-4e90-9ec3-c15d733ef749" (UID: "88976114-2a63-4e90-9ec3-c15d733ef749"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 3 13:54:57.826978 systemd[1]: var-lib-kubelet-pods-88976114\x2d2a63\x2d4e90\x2d9ec3\x2dc15d733ef749-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 3 13:54:57.830198 kubelet[2857]: I0303 13:54:57.828104 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88976114-2a63-4e90-9ec3-c15d733ef749-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "88976114-2a63-4e90-9ec3-c15d733ef749" (UID: "88976114-2a63-4e90-9ec3-c15d733ef749"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 3 13:54:57.836746 kubelet[2857]: I0303 13:54:57.833508 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88976114-2a63-4e90-9ec3-c15d733ef749-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "88976114-2a63-4e90-9ec3-c15d733ef749" (UID: "88976114-2a63-4e90-9ec3-c15d733ef749"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 3 13:54:57.853096 systemd[1]: var-lib-kubelet-pods-88976114\x2d2a63\x2d4e90\x2d9ec3\x2dc15d733ef749-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dth6nb.mount: Deactivated successfully. 
Mar 3 13:54:57.857070 kubelet[2857]: I0303 13:54:57.855657 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88976114-2a63-4e90-9ec3-c15d733ef749-kube-api-access-th6nb" (OuterVolumeSpecName: "kube-api-access-th6nb") pod "88976114-2a63-4e90-9ec3-c15d733ef749" (UID: "88976114-2a63-4e90-9ec3-c15d733ef749"). InnerVolumeSpecName "kube-api-access-th6nb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 3 13:54:57.881447 kubelet[2857]: I0303 13:54:57.880840 2857 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88976114-2a63-4e90-9ec3-c15d733ef749-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 3 13:54:57.881447 kubelet[2857]: I0303 13:54:57.881239 2857 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/88976114-2a63-4e90-9ec3-c15d733ef749-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 3 13:54:57.881447 kubelet[2857]: I0303 13:54:57.881263 2857 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/88976114-2a63-4e90-9ec3-c15d733ef749-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 3 13:54:57.881447 kubelet[2857]: I0303 13:54:57.881278 2857 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-th6nb\" (UniqueName: \"kubernetes.io/projected/88976114-2a63-4e90-9ec3-c15d733ef749-kube-api-access-th6nb\") on node \"localhost\" DevicePath \"\"" Mar 3 13:54:58.593115 systemd[1]: Removed slice kubepods-besteffort-pod88976114_2a63_4e90_9ec3_c15d733ef749.slice - libcontainer container kubepods-besteffort-pod88976114_2a63_4e90_9ec3_c15d733ef749.slice. Mar 3 13:54:58.985415 systemd[1]: Created slice kubepods-besteffort-pod371f5321_58eb_4128_98e7_951b74a8f887.slice - libcontainer container kubepods-besteffort-pod371f5321_58eb_4128_98e7_951b74a8f887.slice. 
Mar 3 13:54:59.119310 kubelet[2857]: I0303 13:54:59.118435 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/371f5321-58eb-4128-98e7-951b74a8f887-nginx-config\") pod \"whisker-694d8d5fb6-6jzdh\" (UID: \"371f5321-58eb-4128-98e7-951b74a8f887\") " pod="calico-system/whisker-694d8d5fb6-6jzdh" Mar 3 13:54:59.119310 kubelet[2857]: I0303 13:54:59.118556 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp468\" (UniqueName: \"kubernetes.io/projected/371f5321-58eb-4128-98e7-951b74a8f887-kube-api-access-sp468\") pod \"whisker-694d8d5fb6-6jzdh\" (UID: \"371f5321-58eb-4128-98e7-951b74a8f887\") " pod="calico-system/whisker-694d8d5fb6-6jzdh" Mar 3 13:54:59.119310 kubelet[2857]: I0303 13:54:59.118667 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/371f5321-58eb-4128-98e7-951b74a8f887-whisker-ca-bundle\") pod \"whisker-694d8d5fb6-6jzdh\" (UID: \"371f5321-58eb-4128-98e7-951b74a8f887\") " pod="calico-system/whisker-694d8d5fb6-6jzdh" Mar 3 13:54:59.119310 kubelet[2857]: I0303 13:54:59.118862 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/371f5321-58eb-4128-98e7-951b74a8f887-whisker-backend-key-pair\") pod \"whisker-694d8d5fb6-6jzdh\" (UID: \"371f5321-58eb-4128-98e7-951b74a8f887\") " pod="calico-system/whisker-694d8d5fb6-6jzdh" Mar 3 13:54:59.283074 kubelet[2857]: I0303 13:54:59.282574 2857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88976114-2a63-4e90-9ec3-c15d733ef749" path="/var/lib/kubelet/pods/88976114-2a63-4e90-9ec3-c15d733ef749/volumes" Mar 3 13:54:59.319679 containerd[1567]: time="2026-03-03T13:54:59.319582731Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-694d8d5fb6-6jzdh,Uid:371f5321-58eb-4128-98e7-951b74a8f887,Namespace:calico-system,Attempt:0,}" Mar 3 13:55:00.099786 systemd-networkd[1451]: cali998f3f05fa5: Link UP Mar 3 13:55:00.115152 systemd-networkd[1451]: cali998f3f05fa5: Gained carrier Mar 3 13:55:00.213246 containerd[1567]: 2026-03-03 13:54:59.461 [ERROR][4184] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 3 13:55:00.213246 containerd[1567]: 2026-03-03 13:54:59.586 [INFO][4184] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0 whisker-694d8d5fb6- calico-system 371f5321-58eb-4128-98e7-951b74a8f887 1181 0 2026-03-03 13:54:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:694d8d5fb6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-694d8d5fb6-6jzdh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali998f3f05fa5 [] [] }} ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Namespace="calico-system" Pod="whisker-694d8d5fb6-6jzdh" WorkloadEndpoint="localhost-k8s-whisker--694d8d5fb6--6jzdh-" Mar 3 13:55:00.213246 containerd[1567]: 2026-03-03 13:54:59.586 [INFO][4184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Namespace="calico-system" Pod="whisker-694d8d5fb6-6jzdh" WorkloadEndpoint="localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0" Mar 3 13:55:00.213246 containerd[1567]: 2026-03-03 13:54:59.788 [INFO][4196] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" 
HandleID="k8s-pod-network.eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Workload="localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0" Mar 3 13:55:00.213694 containerd[1567]: 2026-03-03 13:54:59.822 [INFO][4196] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" HandleID="k8s-pod-network.eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Workload="localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-694d8d5fb6-6jzdh", "timestamp":"2026-03-03 13:54:59.788574045 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d6c60)} Mar 3 13:55:00.213694 containerd[1567]: 2026-03-03 13:54:59.823 [INFO][4196] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:55:00.213694 containerd[1567]: 2026-03-03 13:54:59.823 [INFO][4196] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:55:00.213694 containerd[1567]: 2026-03-03 13:54:59.823 [INFO][4196] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:55:00.213694 containerd[1567]: 2026-03-03 13:54:59.832 [INFO][4196] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" host="localhost" Mar 3 13:55:00.213694 containerd[1567]: 2026-03-03 13:54:59.878 [INFO][4196] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:55:00.213694 containerd[1567]: 2026-03-03 13:54:59.909 [INFO][4196] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:55:00.213694 containerd[1567]: 2026-03-03 13:54:59.922 [INFO][4196] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:00.213694 containerd[1567]: 2026-03-03 13:54:59.935 [INFO][4196] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:00.213694 containerd[1567]: 2026-03-03 13:54:59.939 [INFO][4196] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" host="localhost" Mar 3 13:55:00.214434 containerd[1567]: 2026-03-03 13:54:59.949 [INFO][4196] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002 Mar 3 13:55:00.214434 containerd[1567]: 2026-03-03 13:54:59.976 [INFO][4196] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" host="localhost" Mar 3 13:55:00.214434 containerd[1567]: 2026-03-03 13:55:00.006 [INFO][4196] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" host="localhost" Mar 3 13:55:00.214434 containerd[1567]: 2026-03-03 13:55:00.007 [INFO][4196] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" host="localhost" Mar 3 13:55:00.214434 containerd[1567]: 2026-03-03 13:55:00.007 [INFO][4196] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:55:00.214434 containerd[1567]: 2026-03-03 13:55:00.007 [INFO][4196] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" HandleID="k8s-pod-network.eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Workload="localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0" Mar 3 13:55:00.214752 containerd[1567]: 2026-03-03 13:55:00.030 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Namespace="calico-system" Pod="whisker-694d8d5fb6-6jzdh" WorkloadEndpoint="localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0", GenerateName:"whisker-694d8d5fb6-", Namespace:"calico-system", SelfLink:"", UID:"371f5321-58eb-4128-98e7-951b74a8f887", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"694d8d5fb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-694d8d5fb6-6jzdh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali998f3f05fa5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:00.214752 containerd[1567]: 2026-03-03 13:55:00.030 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Namespace="calico-system" Pod="whisker-694d8d5fb6-6jzdh" WorkloadEndpoint="localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0" Mar 3 13:55:00.216149 containerd[1567]: 2026-03-03 13:55:00.030 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali998f3f05fa5 ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Namespace="calico-system" Pod="whisker-694d8d5fb6-6jzdh" WorkloadEndpoint="localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0" Mar 3 13:55:00.216149 containerd[1567]: 2026-03-03 13:55:00.123 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Namespace="calico-system" Pod="whisker-694d8d5fb6-6jzdh" WorkloadEndpoint="localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0" Mar 3 13:55:00.216282 containerd[1567]: 2026-03-03 13:55:00.127 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Namespace="calico-system" Pod="whisker-694d8d5fb6-6jzdh" 
WorkloadEndpoint="localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0", GenerateName:"whisker-694d8d5fb6-", Namespace:"calico-system", SelfLink:"", UID:"371f5321-58eb-4128-98e7-951b74a8f887", ResourceVersion:"1181", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"694d8d5fb6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002", Pod:"whisker-694d8d5fb6-6jzdh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali998f3f05fa5", MAC:"96:a7:51:b1:c9:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:00.216549 containerd[1567]: 2026-03-03 13:55:00.178 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" Namespace="calico-system" Pod="whisker-694d8d5fb6-6jzdh" WorkloadEndpoint="localhost-k8s-whisker--694d8d5fb6--6jzdh-eth0" Mar 3 13:55:00.547970 containerd[1567]: time="2026-03-03T13:55:00.546716042Z" level=info msg="connecting to shim 
eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002" address="unix:///run/containerd/s/ecd6c5aac56af117e6d32a01c51cd6138a9641c81826849c24aa5909a305a922" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:55:00.758651 systemd[1]: Started cri-containerd-eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002.scope - libcontainer container eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002. Mar 3 13:55:00.890409 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:55:01.018776 containerd[1567]: time="2026-03-03T13:55:01.018514394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-694d8d5fb6-6jzdh,Uid:371f5321-58eb-4128-98e7-951b74a8f887,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002\"" Mar 3 13:55:01.022881 containerd[1567]: time="2026-03-03T13:55:01.022849468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 3 13:55:01.764391 systemd-networkd[1451]: cali998f3f05fa5: Gained IPv6LL Mar 3 13:55:03.659232 containerd[1567]: time="2026-03-03T13:55:03.659048342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:55:03.674166 containerd[1567]: time="2026-03-03T13:55:03.660867413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 3 13:55:03.674166 containerd[1567]: time="2026-03-03T13:55:03.673071198Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:55:03.678795 containerd[1567]: time="2026-03-03T13:55:03.678760447Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:55:03.680068 containerd[1567]: time="2026-03-03T13:55:03.680035394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.656963483s" Mar 3 13:55:03.680876 containerd[1567]: time="2026-03-03T13:55:03.680383954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 3 13:55:03.703770 containerd[1567]: time="2026-03-03T13:55:03.700476770Z" level=info msg="CreateContainer within sandbox \"eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 3 13:55:03.759143 containerd[1567]: time="2026-03-03T13:55:03.759086877Z" level=info msg="Container c1bc29e9803acd73dd74c9c1f6a123cff4cc0d4b0d14e32d76f54e7c16d20d37: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:55:03.792652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037935267.mount: Deactivated successfully. 
Mar 3 13:55:03.827244 containerd[1567]: time="2026-03-03T13:55:03.827159143Z" level=info msg="CreateContainer within sandbox \"eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c1bc29e9803acd73dd74c9c1f6a123cff4cc0d4b0d14e32d76f54e7c16d20d37\"" Mar 3 13:55:03.829671 containerd[1567]: time="2026-03-03T13:55:03.829583783Z" level=info msg="StartContainer for \"c1bc29e9803acd73dd74c9c1f6a123cff4cc0d4b0d14e32d76f54e7c16d20d37\"" Mar 3 13:55:03.840804 containerd[1567]: time="2026-03-03T13:55:03.835419595Z" level=info msg="connecting to shim c1bc29e9803acd73dd74c9c1f6a123cff4cc0d4b0d14e32d76f54e7c16d20d37" address="unix:///run/containerd/s/ecd6c5aac56af117e6d32a01c51cd6138a9641c81826849c24aa5909a305a922" protocol=ttrpc version=3 Mar 3 13:55:04.370120 systemd[1]: Started cri-containerd-c1bc29e9803acd73dd74c9c1f6a123cff4cc0d4b0d14e32d76f54e7c16d20d37.scope - libcontainer container c1bc29e9803acd73dd74c9c1f6a123cff4cc0d4b0d14e32d76f54e7c16d20d37. 
Mar 3 13:55:04.382882 systemd-networkd[1451]: vxlan.calico: Link UP Mar 3 13:55:04.383648 systemd-networkd[1451]: vxlan.calico: Gained carrier Mar 3 13:55:04.847666 containerd[1567]: time="2026-03-03T13:55:04.844524312Z" level=info msg="StartContainer for \"c1bc29e9803acd73dd74c9c1f6a123cff4cc0d4b0d14e32d76f54e7c16d20d37\" returns successfully" Mar 3 13:55:04.868157 containerd[1567]: time="2026-03-03T13:55:04.866658828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 3 13:55:05.789541 systemd-networkd[1451]: vxlan.calico: Gained IPv6LL Mar 3 13:55:07.289313 containerd[1567]: time="2026-03-03T13:55:07.287867286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg6h7,Uid:deebe8a3-257b-493a-a62d-87fcddfaf3ce,Namespace:calico-system,Attempt:0,}" Mar 3 13:55:07.888056 systemd-networkd[1451]: calia8bf5c21fe7: Link UP Mar 3 13:55:07.899571 systemd-networkd[1451]: calia8bf5c21fe7: Gained carrier Mar 3 13:55:07.922225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611727667.mount: Deactivated successfully. 
Mar 3 13:55:07.966980 containerd[1567]: 2026-03-03 13:55:07.502 [INFO][4541] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bg6h7-eth0 csi-node-driver- calico-system deebe8a3-257b-493a-a62d-87fcddfaf3ce 916 0 2026-03-03 13:54:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bg6h7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia8bf5c21fe7 [] [] }} ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" Namespace="calico-system" Pod="csi-node-driver-bg6h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--bg6h7-" Mar 3 13:55:07.966980 containerd[1567]: 2026-03-03 13:55:07.502 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" Namespace="calico-system" Pod="csi-node-driver-bg6h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--bg6h7-eth0" Mar 3 13:55:07.966980 containerd[1567]: 2026-03-03 13:55:07.632 [INFO][4554] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" HandleID="k8s-pod-network.19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" Workload="localhost-k8s-csi--node--driver--bg6h7-eth0" Mar 3 13:55:07.968519 containerd[1567]: 2026-03-03 13:55:07.648 [INFO][4554] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" HandleID="k8s-pod-network.19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" 
Workload="localhost-k8s-csi--node--driver--bg6h7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f410), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bg6h7", "timestamp":"2026-03-03 13:55:07.632124176 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00033ec60)} Mar 3 13:55:07.968519 containerd[1567]: 2026-03-03 13:55:07.649 [INFO][4554] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:55:07.968519 containerd[1567]: 2026-03-03 13:55:07.649 [INFO][4554] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:55:07.968519 containerd[1567]: 2026-03-03 13:55:07.650 [INFO][4554] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:55:07.968519 containerd[1567]: 2026-03-03 13:55:07.673 [INFO][4554] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" host="localhost" Mar 3 13:55:07.968519 containerd[1567]: 2026-03-03 13:55:07.704 [INFO][4554] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:55:07.968519 containerd[1567]: 2026-03-03 13:55:07.741 [INFO][4554] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:55:07.968519 containerd[1567]: 2026-03-03 13:55:07.751 [INFO][4554] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:07.968519 containerd[1567]: 2026-03-03 13:55:07.768 [INFO][4554] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:07.968519 containerd[1567]: 2026-03-03 13:55:07.771 [INFO][4554] ipam/ipam.go 1245: Attempting to assign 1 addresses 
from block block=192.168.88.128/26 handle="k8s-pod-network.19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" host="localhost" Mar 3 13:55:07.971718 containerd[1567]: 2026-03-03 13:55:07.792 [INFO][4554] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a Mar 3 13:55:07.971718 containerd[1567]: 2026-03-03 13:55:07.827 [INFO][4554] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" host="localhost" Mar 3 13:55:07.971718 containerd[1567]: 2026-03-03 13:55:07.853 [INFO][4554] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" host="localhost" Mar 3 13:55:07.971718 containerd[1567]: 2026-03-03 13:55:07.854 [INFO][4554] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" host="localhost" Mar 3 13:55:07.971718 containerd[1567]: 2026-03-03 13:55:07.856 [INFO][4554] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 13:55:07.971718 containerd[1567]: 2026-03-03 13:55:07.856 [INFO][4554] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" HandleID="k8s-pod-network.19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" Workload="localhost-k8s-csi--node--driver--bg6h7-eth0" Mar 3 13:55:07.973090 containerd[1567]: 2026-03-03 13:55:07.869 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" Namespace="calico-system" Pod="csi-node-driver-bg6h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--bg6h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bg6h7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"deebe8a3-257b-493a-a62d-87fcddfaf3ce", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bg6h7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calia8bf5c21fe7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:07.973320 containerd[1567]: 2026-03-03 13:55:07.869 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" Namespace="calico-system" Pod="csi-node-driver-bg6h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--bg6h7-eth0" Mar 3 13:55:07.973320 containerd[1567]: 2026-03-03 13:55:07.870 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8bf5c21fe7 ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" Namespace="calico-system" Pod="csi-node-driver-bg6h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--bg6h7-eth0" Mar 3 13:55:07.973320 containerd[1567]: 2026-03-03 13:55:07.901 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" Namespace="calico-system" Pod="csi-node-driver-bg6h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--bg6h7-eth0" Mar 3 13:55:07.973406 containerd[1567]: 2026-03-03 13:55:07.902 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" Namespace="calico-system" Pod="csi-node-driver-bg6h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--bg6h7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bg6h7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"deebe8a3-257b-493a-a62d-87fcddfaf3ce", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 15, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a", Pod:"csi-node-driver-bg6h7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia8bf5c21fe7", MAC:"2a:ea:b3:3d:81:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:07.973534 containerd[1567]: 2026-03-03 13:55:07.952 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" Namespace="calico-system" Pod="csi-node-driver-bg6h7" WorkloadEndpoint="localhost-k8s-csi--node--driver--bg6h7-eth0" Mar 3 13:55:08.081268 containerd[1567]: time="2026-03-03T13:55:08.081207132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:55:08.093030 containerd[1567]: time="2026-03-03T13:55:08.092876852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 3 13:55:08.097995 containerd[1567]: time="2026-03-03T13:55:08.097780074Z" 
level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:55:08.134817 containerd[1567]: time="2026-03-03T13:55:08.134716165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:55:08.139335 containerd[1567]: time="2026-03-03T13:55:08.138603005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.271796953s" Mar 3 13:55:08.139335 containerd[1567]: time="2026-03-03T13:55:08.138731194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 3 13:55:08.159767 containerd[1567]: time="2026-03-03T13:55:08.158500835Z" level=info msg="CreateContainer within sandbox \"eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 3 13:55:08.224826 containerd[1567]: time="2026-03-03T13:55:08.222864594Z" level=info msg="Container 75a039fdd8218d8e86d627200fc2289dd6211a772e00375189cfaabb4cf4ba52: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:55:08.234067 containerd[1567]: time="2026-03-03T13:55:08.232007103Z" level=info msg="connecting to shim 19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a" address="unix:///run/containerd/s/84fc96e52af31f478b5c0491c095fed881a28859c90f140a1a2a7a8ee3739964" namespace=k8s.io protocol=ttrpc version=3 Mar 3 
13:55:08.232771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount998299673.mount: Deactivated successfully. Mar 3 13:55:08.265199 containerd[1567]: time="2026-03-03T13:55:08.265137674Z" level=info msg="CreateContainer within sandbox \"eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"75a039fdd8218d8e86d627200fc2289dd6211a772e00375189cfaabb4cf4ba52\"" Mar 3 13:55:08.276867 containerd[1567]: time="2026-03-03T13:55:08.272183040Z" level=info msg="StartContainer for \"75a039fdd8218d8e86d627200fc2289dd6211a772e00375189cfaabb4cf4ba52\"" Mar 3 13:55:08.281132 containerd[1567]: time="2026-03-03T13:55:08.280739918Z" level=info msg="connecting to shim 75a039fdd8218d8e86d627200fc2289dd6211a772e00375189cfaabb4cf4ba52" address="unix:///run/containerd/s/ecd6c5aac56af117e6d32a01c51cd6138a9641c81826849c24aa5909a305a922" protocol=ttrpc version=3 Mar 3 13:55:08.282824 containerd[1567]: time="2026-03-03T13:55:08.282794526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b9f97489-lmdc7,Uid:3eab8073-1931-493b-a085-eabcfb11ddb8,Namespace:calico-system,Attempt:0,}" Mar 3 13:55:08.376198 systemd[1]: Started cri-containerd-19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a.scope - libcontainer container 19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a. Mar 3 13:55:08.396439 systemd[1]: Started cri-containerd-75a039fdd8218d8e86d627200fc2289dd6211a772e00375189cfaabb4cf4ba52.scope - libcontainer container 75a039fdd8218d8e86d627200fc2289dd6211a772e00375189cfaabb4cf4ba52. 
Mar 3 13:55:08.467299 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:55:08.566815 containerd[1567]: time="2026-03-03T13:55:08.566765422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg6h7,Uid:deebe8a3-257b-493a-a62d-87fcddfaf3ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a\"" Mar 3 13:55:08.580060 containerd[1567]: time="2026-03-03T13:55:08.580017691Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 3 13:55:08.662668 containerd[1567]: time="2026-03-03T13:55:08.661391337Z" level=info msg="StartContainer for \"75a039fdd8218d8e86d627200fc2289dd6211a772e00375189cfaabb4cf4ba52\" returns successfully" Mar 3 13:55:08.911741 systemd-networkd[1451]: cali97d50ea59e5: Link UP Mar 3 13:55:08.912459 systemd-networkd[1451]: cali97d50ea59e5: Gained carrier Mar 3 13:55:08.986140 containerd[1567]: 2026-03-03 13:55:08.504 [INFO][4616] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0 calico-kube-controllers-54b9f97489- calico-system 3eab8073-1931-493b-a085-eabcfb11ddb8 1122 0 2026-03-03 13:54:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54b9f97489 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-54b9f97489-lmdc7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali97d50ea59e5 [] [] }} ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Namespace="calico-system" Pod="calico-kube-controllers-54b9f97489-lmdc7" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-" Mar 3 13:55:08.986140 containerd[1567]: 2026-03-03 13:55:08.509 [INFO][4616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Namespace="calico-system" Pod="calico-kube-controllers-54b9f97489-lmdc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0" Mar 3 13:55:08.986140 containerd[1567]: 2026-03-03 13:55:08.643 [INFO][4666] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" HandleID="k8s-pod-network.1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Workload="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0" Mar 3 13:55:08.986452 containerd[1567]: 2026-03-03 13:55:08.664 [INFO][4666] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" HandleID="k8s-pod-network.1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Workload="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0007940a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-54b9f97489-lmdc7", "timestamp":"2026-03-03 13:55:08.643174058 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a46e0)} Mar 3 13:55:08.986452 containerd[1567]: 2026-03-03 13:55:08.664 [INFO][4666] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Mar 3 13:55:08.986452 containerd[1567]: 2026-03-03 13:55:08.664 [INFO][4666] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:55:08.986452 containerd[1567]: 2026-03-03 13:55:08.664 [INFO][4666] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:55:08.986452 containerd[1567]: 2026-03-03 13:55:08.696 [INFO][4666] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" host="localhost" Mar 3 13:55:08.986452 containerd[1567]: 2026-03-03 13:55:08.728 [INFO][4666] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:55:08.986452 containerd[1567]: 2026-03-03 13:55:08.759 [INFO][4666] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:55:08.986452 containerd[1567]: 2026-03-03 13:55:08.766 [INFO][4666] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:08.986452 containerd[1567]: 2026-03-03 13:55:08.775 [INFO][4666] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:08.987206 containerd[1567]: 2026-03-03 13:55:08.776 [INFO][4666] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" host="localhost" Mar 3 13:55:08.987206 containerd[1567]: 2026-03-03 13:55:08.786 [INFO][4666] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48 Mar 3 13:55:08.987206 containerd[1567]: 2026-03-03 13:55:08.835 [INFO][4666] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" host="localhost" Mar 3 13:55:08.987206 containerd[1567]: 2026-03-03 13:55:08.879 [INFO][4666] ipam/ipam.go 1288: 
Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" host="localhost" Mar 3 13:55:08.987206 containerd[1567]: 2026-03-03 13:55:08.879 [INFO][4666] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" host="localhost" Mar 3 13:55:08.987206 containerd[1567]: 2026-03-03 13:55:08.880 [INFO][4666] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:55:08.987206 containerd[1567]: 2026-03-03 13:55:08.880 [INFO][4666] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" HandleID="k8s-pod-network.1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Workload="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0" Mar 3 13:55:08.987397 containerd[1567]: 2026-03-03 13:55:08.900 [INFO][4616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Namespace="calico-system" Pod="calico-kube-controllers-54b9f97489-lmdc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0", GenerateName:"calico-kube-controllers-54b9f97489-", Namespace:"calico-system", SelfLink:"", UID:"3eab8073-1931-493b-a085-eabcfb11ddb8", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"54b9f97489", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-54b9f97489-lmdc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali97d50ea59e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:08.987541 containerd[1567]: 2026-03-03 13:55:08.900 [INFO][4616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Namespace="calico-system" Pod="calico-kube-controllers-54b9f97489-lmdc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0" Mar 3 13:55:08.987541 containerd[1567]: 2026-03-03 13:55:08.900 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali97d50ea59e5 ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Namespace="calico-system" Pod="calico-kube-controllers-54b9f97489-lmdc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0" Mar 3 13:55:08.987541 containerd[1567]: 2026-03-03 13:55:08.925 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Namespace="calico-system" Pod="calico-kube-controllers-54b9f97489-lmdc7" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0" Mar 3 13:55:08.987707 containerd[1567]: 2026-03-03 13:55:08.925 [INFO][4616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Namespace="calico-system" Pod="calico-kube-controllers-54b9f97489-lmdc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0", GenerateName:"calico-kube-controllers-54b9f97489-", Namespace:"calico-system", SelfLink:"", UID:"3eab8073-1931-493b-a085-eabcfb11ddb8", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b9f97489", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48", Pod:"calico-kube-controllers-54b9f97489-lmdc7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali97d50ea59e5", MAC:"aa:ec:c6:03:ae:07", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:08.987840 containerd[1567]: 2026-03-03 13:55:08.963 [INFO][4616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" Namespace="calico-system" Pod="calico-kube-controllers-54b9f97489-lmdc7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--54b9f97489--lmdc7-eth0" Mar 3 13:55:09.090279 kubelet[2857]: I0303 13:55:09.090038 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-694d8d5fb6-6jzdh" podStartSLOduration=3.9698859 podStartE2EDuration="11.090015277s" podCreationTimestamp="2026-03-03 13:54:58 +0000 UTC" firstStartedPulling="2026-03-03 13:55:01.02230403 +0000 UTC m=+124.434912273" lastFinishedPulling="2026-03-03 13:55:08.142433407 +0000 UTC m=+131.555041650" observedRunningTime="2026-03-03 13:55:09.085838739 +0000 UTC m=+132.498446992" watchObservedRunningTime="2026-03-03 13:55:09.090015277 +0000 UTC m=+132.502623539" Mar 3 13:55:09.177242 systemd-networkd[1451]: calia8bf5c21fe7: Gained IPv6LL Mar 3 13:55:09.188688 containerd[1567]: time="2026-03-03T13:55:09.187318827Z" level=info msg="connecting to shim 1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48" address="unix:///run/containerd/s/e927ac249e9feadd21436bd9e9f4e1f5870edf7c3a6e348a9c851189ba46a89f" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:55:09.287655 containerd[1567]: time="2026-03-03T13:55:09.287437700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-dqcvl,Uid:85572497-fd61-4805-83e2-fd71e6c3af99,Namespace:calico-system,Attempt:0,}" Mar 3 13:55:09.292753 kubelet[2857]: E0303 13:55:09.292664 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:55:09.294974 containerd[1567]: 
time="2026-03-03T13:55:09.294513661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c9b68fff-5cpns,Uid:aaa974c7-96c4-4052-b9a9-1875c2f7ed66,Namespace:calico-system,Attempt:0,}" Mar 3 13:55:09.297564 containerd[1567]: time="2026-03-03T13:55:09.296697702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hg5q2,Uid:0d3a646c-782f-4cf1-be06-96da412da3c6,Namespace:kube-system,Attempt:0,}" Mar 3 13:55:09.299129 containerd[1567]: time="2026-03-03T13:55:09.298997224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c9b68fff-xwbpz,Uid:ac3696c3-2c99-4adc-b76d-fc52fa6fb25a,Namespace:calico-system,Attempt:0,}" Mar 3 13:55:09.431207 systemd[1]: Started cri-containerd-1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48.scope - libcontainer container 1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48. Mar 3 13:55:09.560330 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:55:09.773475 containerd[1567]: time="2026-03-03T13:55:09.773379192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b9f97489-lmdc7,Uid:3eab8073-1931-493b-a085-eabcfb11ddb8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48\"" Mar 3 13:55:09.971276 systemd-networkd[1451]: cali17a68cd57bf: Link UP Mar 3 13:55:09.978852 systemd-networkd[1451]: cali17a68cd57bf: Gained carrier Mar 3 13:55:10.080140 containerd[1567]: 2026-03-03 13:55:09.650 [INFO][4754] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--hg5q2-eth0 coredns-66bc5c9577- kube-system 0d3a646c-782f-4cf1-be06-96da412da3c6 1126 0 2026-03-03 13:53:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-hg5q2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali17a68cd57bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Namespace="kube-system" Pod="coredns-66bc5c9577-hg5q2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hg5q2-" Mar 3 13:55:10.080140 containerd[1567]: 2026-03-03 13:55:09.651 [INFO][4754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Namespace="kube-system" Pod="coredns-66bc5c9577-hg5q2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hg5q2-eth0" Mar 3 13:55:10.080140 containerd[1567]: 2026-03-03 13:55:09.788 [INFO][4816] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" HandleID="k8s-pod-network.ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Workload="localhost-k8s-coredns--66bc5c9577--hg5q2-eth0" Mar 3 13:55:10.081596 containerd[1567]: 2026-03-03 13:55:09.818 [INFO][4816] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" HandleID="k8s-pod-network.ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Workload="localhost-k8s-coredns--66bc5c9577--hg5q2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-hg5q2", "timestamp":"2026-03-03 13:55:09.788419571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005f0420)} Mar 3 13:55:10.081596 containerd[1567]: 2026-03-03 13:55:09.818 [INFO][4816] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:55:10.081596 containerd[1567]: 2026-03-03 13:55:09.818 [INFO][4816] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:55:10.081596 containerd[1567]: 2026-03-03 13:55:09.818 [INFO][4816] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:55:10.081596 containerd[1567]: 2026-03-03 13:55:09.827 [INFO][4816] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" host="localhost" Mar 3 13:55:10.081596 containerd[1567]: 2026-03-03 13:55:09.840 [INFO][4816] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:55:10.081596 containerd[1567]: 2026-03-03 13:55:09.855 [INFO][4816] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:55:10.081596 containerd[1567]: 2026-03-03 13:55:09.860 [INFO][4816] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:10.081596 containerd[1567]: 2026-03-03 13:55:09.872 [INFO][4816] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:10.081596 containerd[1567]: 2026-03-03 13:55:09.872 [INFO][4816] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" host="localhost" Mar 3 13:55:10.085422 containerd[1567]: 2026-03-03 13:55:09.876 [INFO][4816] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724 Mar 3 13:55:10.085422 containerd[1567]: 2026-03-03 13:55:09.895 [INFO][4816] ipam/ipam.go 1272: Writing 
block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" host="localhost" Mar 3 13:55:10.085422 containerd[1567]: 2026-03-03 13:55:09.945 [INFO][4816] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" host="localhost" Mar 3 13:55:10.085422 containerd[1567]: 2026-03-03 13:55:09.948 [INFO][4816] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" host="localhost" Mar 3 13:55:10.085422 containerd[1567]: 2026-03-03 13:55:09.949 [INFO][4816] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:55:10.085422 containerd[1567]: 2026-03-03 13:55:09.949 [INFO][4816] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" HandleID="k8s-pod-network.ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Workload="localhost-k8s-coredns--66bc5c9577--hg5q2-eth0" Mar 3 13:55:10.085677 containerd[1567]: 2026-03-03 13:55:09.964 [INFO][4754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Namespace="kube-system" Pod="coredns-66bc5c9577-hg5q2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hg5q2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--hg5q2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0d3a646c-782f-4cf1-be06-96da412da3c6", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 53, 0, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-hg5q2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17a68cd57bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:10.085677 containerd[1567]: 2026-03-03 13:55:09.965 [INFO][4754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Namespace="kube-system" Pod="coredns-66bc5c9577-hg5q2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hg5q2-eth0" Mar 3 13:55:10.085677 
containerd[1567]: 2026-03-03 13:55:09.965 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17a68cd57bf ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Namespace="kube-system" Pod="coredns-66bc5c9577-hg5q2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hg5q2-eth0" Mar 3 13:55:10.085677 containerd[1567]: 2026-03-03 13:55:09.977 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Namespace="kube-system" Pod="coredns-66bc5c9577-hg5q2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hg5q2-eth0" Mar 3 13:55:10.085677 containerd[1567]: 2026-03-03 13:55:09.991 [INFO][4754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Namespace="kube-system" Pod="coredns-66bc5c9577-hg5q2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hg5q2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--hg5q2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"0d3a646c-782f-4cf1-be06-96da412da3c6", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724", Pod:"coredns-66bc5c9577-hg5q2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17a68cd57bf", MAC:"52:d0:cb:6f:77:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:10.085677 containerd[1567]: 2026-03-03 13:55:10.068 [INFO][4754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" Namespace="kube-system" Pod="coredns-66bc5c9577-hg5q2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--hg5q2-eth0" Mar 3 13:55:10.303332 containerd[1567]: time="2026-03-03T13:55:10.283123791Z" level=info msg="connecting to shim ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724" address="unix:///run/containerd/s/346c62d75eb50ade4c665826dce6155651a91277e153435418f230cfcde18a87" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:55:10.336088 systemd-networkd[1451]: cali3266d05fdc0: Link UP Mar 3 13:55:10.336809 systemd-networkd[1451]: 
cali3266d05fdc0: Gained carrier Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:09.655 [INFO][4746] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0 calico-apiserver-68c9b68fff- calico-system ac3696c3-2c99-4adc-b76d-fc52fa6fb25a 1121 0 2026-03-03 13:54:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68c9b68fff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68c9b68fff-xwbpz eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3266d05fdc0 [] [] }} ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-xwbpz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:09.655 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-xwbpz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:09.789 [INFO][4818] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" HandleID="k8s-pod-network.81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Workload="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:09.818 [INFO][4818] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" 
HandleID="k8s-pod-network.81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Workload="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e740), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-68c9b68fff-xwbpz", "timestamp":"2026-03-03 13:55:09.789422329 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe6e0)} Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:09.819 [INFO][4818] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:09.949 [INFO][4818] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:09.949 [INFO][4818] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:09.971 [INFO][4818] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" host="localhost" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.041 [INFO][4818] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.080 [INFO][4818] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.089 [INFO][4818] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.101 [INFO][4818] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 
13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.101 [INFO][4818] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" host="localhost" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.134 [INFO][4818] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0 Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.176 [INFO][4818] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" host="localhost" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.306 [INFO][4818] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" host="localhost" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.307 [INFO][4818] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" host="localhost" Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.307 [INFO][4818] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 13:55:10.504113 containerd[1567]: 2026-03-03 13:55:10.307 [INFO][4818] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" HandleID="k8s-pod-network.81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Workload="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0" Mar 3 13:55:10.517748 containerd[1567]: 2026-03-03 13:55:10.327 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-xwbpz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0", GenerateName:"calico-apiserver-68c9b68fff-", Namespace:"calico-system", SelfLink:"", UID:"ac3696c3-2c99-4adc-b76d-fc52fa6fb25a", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c9b68fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68c9b68fff-xwbpz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3266d05fdc0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:10.517748 containerd[1567]: 2026-03-03 13:55:10.328 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-xwbpz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0" Mar 3 13:55:10.517748 containerd[1567]: 2026-03-03 13:55:10.328 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3266d05fdc0 ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-xwbpz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0" Mar 3 13:55:10.517748 containerd[1567]: 2026-03-03 13:55:10.338 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-xwbpz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0" Mar 3 13:55:10.517748 containerd[1567]: 2026-03-03 13:55:10.339 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-xwbpz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0", GenerateName:"calico-apiserver-68c9b68fff-", Namespace:"calico-system", 
SelfLink:"", UID:"ac3696c3-2c99-4adc-b76d-fc52fa6fb25a", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c9b68fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0", Pod:"calico-apiserver-68c9b68fff-xwbpz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3266d05fdc0", MAC:"c2:c6:fc:d1:eb:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:10.517748 containerd[1567]: 2026-03-03 13:55:10.436 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-xwbpz" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--xwbpz-eth0" Mar 3 13:55:10.553263 systemd[1]: Started cri-containerd-ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724.scope - libcontainer container ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724. 
Mar 3 13:55:10.669060 containerd[1567]: time="2026-03-03T13:55:10.668096550Z" level=info msg="connecting to shim 81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0" address="unix:///run/containerd/s/0c6e829a25d5bb5c5de753f20c09d2daf0bdbd3b110682484766ac896aee82cd" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:55:10.675091 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:55:10.717394 systemd-networkd[1451]: cali6365aabb222: Link UP Mar 3 13:55:10.720008 systemd-networkd[1451]: cali6365aabb222: Gained carrier Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:09.591 [INFO][4744] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0 goldmane-cccfbd5cf- calico-system 85572497-fd61-4805-83e2-fd71e6c3af99 1120 0 2026-03-03 13:54:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-cccfbd5cf-dqcvl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6365aabb222 [] [] }} ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dqcvl" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--dqcvl-" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:09.591 [INFO][4744] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dqcvl" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:09.792 [INFO][4811] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" HandleID="k8s-pod-network.37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Workload="localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:09.823 [INFO][4811] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" HandleID="k8s-pod-network.37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Workload="localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000317560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-cccfbd5cf-dqcvl", "timestamp":"2026-03-03 13:55:09.792129942 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00019ac60)} Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:09.823 [INFO][4811] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.310 [INFO][4811] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.310 [INFO][4811] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.347 [INFO][4811] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" host="localhost" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.455 [INFO][4811] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.498 [INFO][4811] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.519 [INFO][4811] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.539 [INFO][4811] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.542 [INFO][4811] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" host="localhost" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.589 [INFO][4811] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169 Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.641 [INFO][4811] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" host="localhost" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.685 [INFO][4811] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" host="localhost" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.686 [INFO][4811] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" host="localhost" Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.686 [INFO][4811] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:55:10.826736 containerd[1567]: 2026-03-03 13:55:10.686 [INFO][4811] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" HandleID="k8s-pod-network.37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Workload="localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0" Mar 3 13:55:10.831833 containerd[1567]: 2026-03-03 13:55:10.700 [INFO][4744] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dqcvl" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"85572497-fd61-4805-83e2-fd71e6c3af99", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-cccfbd5cf-dqcvl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6365aabb222", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:10.831833 containerd[1567]: 2026-03-03 13:55:10.700 [INFO][4744] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dqcvl" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0" Mar 3 13:55:10.831833 containerd[1567]: 2026-03-03 13:55:10.700 [INFO][4744] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6365aabb222 ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dqcvl" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0" Mar 3 13:55:10.831833 containerd[1567]: 2026-03-03 13:55:10.719 [INFO][4744] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dqcvl" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0" Mar 3 13:55:10.831833 containerd[1567]: 2026-03-03 13:55:10.722 [INFO][4744] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dqcvl" 
WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"85572497-fd61-4805-83e2-fd71e6c3af99", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169", Pod:"goldmane-cccfbd5cf-dqcvl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6365aabb222", MAC:"96:fc:6c:da:b2:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:10.831833 containerd[1567]: 2026-03-03 13:55:10.786 [INFO][4744] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" Namespace="calico-system" Pod="goldmane-cccfbd5cf-dqcvl" WorkloadEndpoint="localhost-k8s-goldmane--cccfbd5cf--dqcvl-eth0" Mar 3 13:55:10.842417 systemd[1]: Started 
cri-containerd-81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0.scope - libcontainer container 81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0. Mar 3 13:55:10.921197 containerd[1567]: time="2026-03-03T13:55:10.921141495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-hg5q2,Uid:0d3a646c-782f-4cf1-be06-96da412da3c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724\"" Mar 3 13:55:10.933777 kubelet[2857]: E0303 13:55:10.932273 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:55:10.966791 containerd[1567]: time="2026-03-03T13:55:10.964775135Z" level=info msg="CreateContainer within sandbox \"ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 13:55:10.970244 systemd-networkd[1451]: cali97d50ea59e5: Gained IPv6LL Mar 3 13:55:11.051115 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:55:11.096188 containerd[1567]: time="2026-03-03T13:55:11.095116332Z" level=info msg="connecting to shim 37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169" address="unix:///run/containerd/s/9663d7e0fa6a03aeedef7e66a0451dbfc66e2f467823aeab7aa75da5f4f58efe" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:55:11.155777 systemd-networkd[1451]: cali8f4880cc926: Link UP Mar 3 13:55:11.158784 systemd-networkd[1451]: cali8f4880cc926: Gained carrier Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:09.632 [INFO][4770] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0 calico-apiserver-68c9b68fff- calico-system aaa974c7-96c4-4052-b9a9-1875c2f7ed66 
1125 0 2026-03-03 13:54:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:68c9b68fff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-68c9b68fff-5cpns eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8f4880cc926 [] [] }} ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-5cpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:09.633 [INFO][4770] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-5cpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:09.819 [INFO][4815] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" HandleID="k8s-pod-network.d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Workload="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:09.845 [INFO][4815] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" HandleID="k8s-pod-network.d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Workload="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f8710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-68c9b68fff-5cpns", "timestamp":"2026-03-03 
13:55:09.81944744 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000186dc0)} Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:09.845 [INFO][4815] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:10.687 [INFO][4815] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:10.687 [INFO][4815] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:10.721 [INFO][4815] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" host="localhost" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:10.783 [INFO][4815] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:10.867 [INFO][4815] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:10.892 [INFO][4815] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:10.906 [INFO][4815] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:10.916 [INFO][4815] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" host="localhost" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:10.923 [INFO][4815] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132 Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:10.971 [INFO][4815] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" host="localhost" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:11.047 [INFO][4815] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" host="localhost" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:11.060 [INFO][4815] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" host="localhost" Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:11.064 [INFO][4815] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 3 13:55:11.244028 containerd[1567]: 2026-03-03 13:55:11.065 [INFO][4815] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" HandleID="k8s-pod-network.d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Workload="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0" Mar 3 13:55:11.246579 containerd[1567]: 2026-03-03 13:55:11.131 [INFO][4770] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-5cpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0", GenerateName:"calico-apiserver-68c9b68fff-", Namespace:"calico-system", SelfLink:"", UID:"aaa974c7-96c4-4052-b9a9-1875c2f7ed66", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c9b68fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-68c9b68fff-5cpns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8f4880cc926", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:11.246579 containerd[1567]: 2026-03-03 13:55:11.132 [INFO][4770] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-5cpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0" Mar 3 13:55:11.246579 containerd[1567]: 2026-03-03 13:55:11.132 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f4880cc926 ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-5cpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0" Mar 3 13:55:11.246579 containerd[1567]: 2026-03-03 13:55:11.158 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-5cpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0" Mar 3 13:55:11.246579 containerd[1567]: 2026-03-03 13:55:11.160 [INFO][4770] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-5cpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0", GenerateName:"calico-apiserver-68c9b68fff-", Namespace:"calico-system", 
SelfLink:"", UID:"aaa974c7-96c4-4052-b9a9-1875c2f7ed66", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 54, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"68c9b68fff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132", Pod:"calico-apiserver-68c9b68fff-5cpns", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8f4880cc926", MAC:"16:83:bd:a9:3e:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:11.246579 containerd[1567]: 2026-03-03 13:55:11.212 [INFO][4770] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" Namespace="calico-system" Pod="calico-apiserver-68c9b68fff-5cpns" WorkloadEndpoint="localhost-k8s-calico--apiserver--68c9b68fff--5cpns-eth0" Mar 3 13:55:11.253749 systemd[1]: Started cri-containerd-37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169.scope - libcontainer container 37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169. 
Mar 3 13:55:11.290686 kubelet[2857]: E0303 13:55:11.290323 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:55:11.293242 containerd[1567]: time="2026-03-03T13:55:11.293200087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-drhn2,Uid:b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6,Namespace:kube-system,Attempt:0,}" Mar 3 13:55:11.342062 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:55:11.382248 containerd[1567]: time="2026-03-03T13:55:11.380227940Z" level=info msg="Container 04a89c1eb35853c2142661577645d8bfa555ad3a91d7dadd0d69e25e562c9a0c: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:55:11.454102 containerd[1567]: time="2026-03-03T13:55:11.450873719Z" level=info msg="CreateContainer within sandbox \"ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04a89c1eb35853c2142661577645d8bfa555ad3a91d7dadd0d69e25e562c9a0c\"" Mar 3 13:55:11.460125 containerd[1567]: time="2026-03-03T13:55:11.459479688Z" level=info msg="StartContainer for \"04a89c1eb35853c2142661577645d8bfa555ad3a91d7dadd0d69e25e562c9a0c\"" Mar 3 13:55:11.466779 containerd[1567]: time="2026-03-03T13:55:11.466375942Z" level=info msg="connecting to shim 04a89c1eb35853c2142661577645d8bfa555ad3a91d7dadd0d69e25e562c9a0c" address="unix:///run/containerd/s/346c62d75eb50ade4c665826dce6155651a91277e153435418f230cfcde18a87" protocol=ttrpc version=3 Mar 3 13:55:11.539601 containerd[1567]: time="2026-03-03T13:55:11.539429177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c9b68fff-xwbpz,Uid:ac3696c3-2c99-4adc-b76d-fc52fa6fb25a,Namespace:calico-system,Attempt:0,} returns sandbox id \"81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0\"" Mar 3 
13:55:11.587277 systemd[1]: Started cri-containerd-04a89c1eb35853c2142661577645d8bfa555ad3a91d7dadd0d69e25e562c9a0c.scope - libcontainer container 04a89c1eb35853c2142661577645d8bfa555ad3a91d7dadd0d69e25e562c9a0c. Mar 3 13:55:11.593151 containerd[1567]: time="2026-03-03T13:55:11.588318787Z" level=info msg="connecting to shim d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132" address="unix:///run/containerd/s/47fdf0f72b17ef862e71222b628a98c833a9a28aea2b38ea006b2f58a3be02d6" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:55:11.657273 containerd[1567]: time="2026-03-03T13:55:11.655350880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-dqcvl,Uid:85572497-fd61-4805-83e2-fd71e6c3af99,Namespace:calico-system,Attempt:0,} returns sandbox id \"37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169\"" Mar 3 13:55:11.740144 systemd-networkd[1451]: cali17a68cd57bf: Gained IPv6LL Mar 3 13:55:11.757854 systemd[1]: Started cri-containerd-d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132.scope - libcontainer container d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132. 
Mar 3 13:55:11.871660 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:55:11.942544 containerd[1567]: time="2026-03-03T13:55:11.938544811Z" level=info msg="StartContainer for \"04a89c1eb35853c2142661577645d8bfa555ad3a91d7dadd0d69e25e562c9a0c\" returns successfully" Mar 3 13:55:12.059235 systemd-networkd[1451]: cali6365aabb222: Gained IPv6LL Mar 3 13:55:12.142245 kubelet[2857]: E0303 13:55:12.142010 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:55:12.185464 containerd[1567]: time="2026-03-03T13:55:12.182446002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:55:12.185464 containerd[1567]: time="2026-03-03T13:55:12.183337174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 3 13:55:12.190379 containerd[1567]: time="2026-03-03T13:55:12.190225779Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:55:12.199448 containerd[1567]: time="2026-03-03T13:55:12.199076403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 3 13:55:12.203091 containerd[1567]: time="2026-03-03T13:55:12.202859476Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", 
size \"10348547\" in 3.622537878s" Mar 3 13:55:12.203091 containerd[1567]: time="2026-03-03T13:55:12.203012211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 3 13:55:12.229510 containerd[1567]: time="2026-03-03T13:55:12.229285150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 3 13:55:12.253472 kubelet[2857]: I0303 13:55:12.250276 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-hg5q2" podStartSLOduration=132.250255563 podStartE2EDuration="2m12.250255563s" podCreationTimestamp="2026-03-03 13:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:55:12.245305743 +0000 UTC m=+135.657913986" watchObservedRunningTime="2026-03-03 13:55:12.250255563 +0000 UTC m=+135.662863816" Mar 3 13:55:12.255735 containerd[1567]: time="2026-03-03T13:55:12.253274219Z" level=info msg="CreateContainer within sandbox \"19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 3 13:55:12.314158 containerd[1567]: time="2026-03-03T13:55:12.308099678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-68c9b68fff-5cpns,Uid:aaa974c7-96c4-4052-b9a9-1875c2f7ed66,Namespace:calico-system,Attempt:0,} returns sandbox id \"d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132\"" Mar 3 13:55:12.358162 containerd[1567]: time="2026-03-03T13:55:12.357029674Z" level=info msg="Container c53f3c2190833aa7017e12b2ae7296115282bde500aff48eb6f545cb4a5b3113: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:55:12.377242 systemd-networkd[1451]: cali3266d05fdc0: Gained IPv6LL Mar 3 13:55:12.410586 containerd[1567]: time="2026-03-03T13:55:12.410211478Z" level=info msg="CreateContainer 
within sandbox \"19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c53f3c2190833aa7017e12b2ae7296115282bde500aff48eb6f545cb4a5b3113\"" Mar 3 13:55:12.429175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount138798360.mount: Deactivated successfully. Mar 3 13:55:12.447838 containerd[1567]: time="2026-03-03T13:55:12.446218005Z" level=info msg="StartContainer for \"c53f3c2190833aa7017e12b2ae7296115282bde500aff48eb6f545cb4a5b3113\"" Mar 3 13:55:12.474210 containerd[1567]: time="2026-03-03T13:55:12.472710655Z" level=info msg="connecting to shim c53f3c2190833aa7017e12b2ae7296115282bde500aff48eb6f545cb4a5b3113" address="unix:///run/containerd/s/84fc96e52af31f478b5c0491c095fed881a28859c90f140a1a2a7a8ee3739964" protocol=ttrpc version=3 Mar 3 13:55:12.565204 systemd[1]: Started cri-containerd-c53f3c2190833aa7017e12b2ae7296115282bde500aff48eb6f545cb4a5b3113.scope - libcontainer container c53f3c2190833aa7017e12b2ae7296115282bde500aff48eb6f545cb4a5b3113. 
Mar 3 13:55:12.584873 systemd-networkd[1451]: cali68bccabbaab: Link UP Mar 3 13:55:12.626057 systemd-networkd[1451]: cali68bccabbaab: Gained carrier Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:11.863 [INFO][5039] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--drhn2-eth0 coredns-66bc5c9577- kube-system b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6 1127 0 2026-03-03 13:53:00 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-drhn2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali68bccabbaab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Namespace="kube-system" Pod="coredns-66bc5c9577-drhn2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--drhn2-" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:11.869 [INFO][5039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Namespace="kube-system" Pod="coredns-66bc5c9577-drhn2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--drhn2-eth0" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.173 [INFO][5153] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" HandleID="k8s-pod-network.1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Workload="localhost-k8s-coredns--66bc5c9577--drhn2-eth0" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.190 [INFO][5153] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" HandleID="k8s-pod-network.1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Workload="localhost-k8s-coredns--66bc5c9577--drhn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005b6e10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-drhn2", "timestamp":"2026-03-03 13:55:12.173066231 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000140f20)} Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.190 [INFO][5153] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.191 [INFO][5153] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.192 [INFO][5153] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.206 [INFO][5153] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" host="localhost" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.288 [INFO][5153] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.390 [INFO][5153] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.395 [INFO][5153] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.423 [INFO][5153] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.430 [INFO][5153] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" host="localhost" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.439 [INFO][5153] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.466 [INFO][5153] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" host="localhost" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.497 [INFO][5153] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" host="localhost" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.500 [INFO][5153] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" host="localhost" Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.500 [INFO][5153] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 3 13:55:12.674472 containerd[1567]: 2026-03-03 13:55:12.500 [INFO][5153] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" HandleID="k8s-pod-network.1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Workload="localhost-k8s-coredns--66bc5c9577--drhn2-eth0" Mar 3 13:55:12.675504 containerd[1567]: 2026-03-03 13:55:12.542 [INFO][5039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Namespace="kube-system" Pod="coredns-66bc5c9577-drhn2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--drhn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--drhn2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-drhn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68bccabbaab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:12.675504 containerd[1567]: 2026-03-03 13:55:12.542 [INFO][5039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Namespace="kube-system" Pod="coredns-66bc5c9577-drhn2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--drhn2-eth0" Mar 3 13:55:12.675504 containerd[1567]: 2026-03-03 13:55:12.542 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali68bccabbaab ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Namespace="kube-system" Pod="coredns-66bc5c9577-drhn2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--drhn2-eth0" Mar 3 
13:55:12.675504 containerd[1567]: 2026-03-03 13:55:12.624 [INFO][5039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Namespace="kube-system" Pod="coredns-66bc5c9577-drhn2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--drhn2-eth0" Mar 3 13:55:12.675504 containerd[1567]: 2026-03-03 13:55:12.631 [INFO][5039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Namespace="kube-system" Pod="coredns-66bc5c9577-drhn2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--drhn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--drhn2-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2026, time.March, 3, 13, 53, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c", Pod:"coredns-66bc5c9577-drhn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali68bccabbaab", MAC:"3e:e3:b7:ac:2e:aa", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 3 13:55:12.675504 containerd[1567]: 2026-03-03 13:55:12.658 [INFO][5039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" Namespace="kube-system" Pod="coredns-66bc5c9577-drhn2" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--drhn2-eth0" Mar 3 13:55:12.699457 systemd-networkd[1451]: cali8f4880cc926: Gained IPv6LL Mar 3 13:55:12.821142 containerd[1567]: time="2026-03-03T13:55:12.820275492Z" level=info msg="connecting to shim 1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c" address="unix:///run/containerd/s/b9334c144998bb4eb7782607873b5792d2177414248bf253513331342c2a4c57" namespace=k8s.io protocol=ttrpc version=3 Mar 3 13:55:12.885305 systemd[1]: Started cri-containerd-1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c.scope - libcontainer container 1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c. 
Mar 3 13:55:12.945155 systemd-resolved[1452]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 3 13:55:12.974298 containerd[1567]: time="2026-03-03T13:55:12.974040276Z" level=info msg="StartContainer for \"c53f3c2190833aa7017e12b2ae7296115282bde500aff48eb6f545cb4a5b3113\" returns successfully" Mar 3 13:55:13.089794 containerd[1567]: time="2026-03-03T13:55:13.089492892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-drhn2,Uid:b9eea95f-f6e8-4b48-a1e5-9dbed8decbb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c\"" Mar 3 13:55:13.092005 kubelet[2857]: E0303 13:55:13.091867 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:55:13.130707 containerd[1567]: time="2026-03-03T13:55:13.130555579Z" level=info msg="CreateContainer within sandbox \"1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 3 13:55:13.190322 containerd[1567]: time="2026-03-03T13:55:13.190239998Z" level=info msg="Container 1869b66e6fc63c9ba9e1927de7062ccec89e33862c957d80a15e5d38c4e95f1e: CDI devices from CRI Config.CDIDevices: []" Mar 3 13:55:13.234988 containerd[1567]: time="2026-03-03T13:55:13.234330143Z" level=info msg="CreateContainer within sandbox \"1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1869b66e6fc63c9ba9e1927de7062ccec89e33862c957d80a15e5d38c4e95f1e\"" Mar 3 13:55:13.260987 containerd[1567]: time="2026-03-03T13:55:13.260280862Z" level=info msg="StartContainer for \"1869b66e6fc63c9ba9e1927de7062ccec89e33862c957d80a15e5d38c4e95f1e\"" Mar 3 13:55:13.279282 containerd[1567]: time="2026-03-03T13:55:13.278445211Z" level=info msg="connecting to 
shim 1869b66e6fc63c9ba9e1927de7062ccec89e33862c957d80a15e5d38c4e95f1e" address="unix:///run/containerd/s/b9334c144998bb4eb7782607873b5792d2177414248bf253513331342c2a4c57" protocol=ttrpc version=3
Mar 3 13:55:13.333218 kubelet[2857]: E0303 13:55:13.333178 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:55:13.396404 systemd[1]: Started cri-containerd-1869b66e6fc63c9ba9e1927de7062ccec89e33862c957d80a15e5d38c4e95f1e.scope - libcontainer container 1869b66e6fc63c9ba9e1927de7062ccec89e33862c957d80a15e5d38c4e95f1e.
Mar 3 13:55:13.431289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2269363495.mount: Deactivated successfully.
Mar 3 13:55:13.632068 containerd[1567]: time="2026-03-03T13:55:13.631798363Z" level=info msg="StartContainer for \"1869b66e6fc63c9ba9e1927de7062ccec89e33862c957d80a15e5d38c4e95f1e\" returns successfully"
Mar 3 13:55:14.281663 kubelet[2857]: E0303 13:55:14.281491 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:55:14.362077 systemd-networkd[1451]: cali68bccabbaab: Gained IPv6LL
Mar 3 13:55:14.446003 kubelet[2857]: E0303 13:55:14.444822 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:55:14.448784 kubelet[2857]: E0303 13:55:14.448275 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:55:14.627541 kubelet[2857]: I0303 13:55:14.627201 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-drhn2" podStartSLOduration=134.627177533 podStartE2EDuration="2m14.627177533s" podCreationTimestamp="2026-03-03 13:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-03 13:55:14.49723758 +0000 UTC m=+137.909845863" watchObservedRunningTime="2026-03-03 13:55:14.627177533 +0000 UTC m=+138.039785786"
Mar 3 13:55:15.455603 kubelet[2857]: E0303 13:55:15.455436 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:55:16.468833 kubelet[2857]: E0303 13:55:16.468693 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:55:17.284397 kubelet[2857]: E0303 13:55:17.284283 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:55:20.394357 containerd[1567]: time="2026-03-03T13:55:20.394130671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:20.399434 containerd[1567]: time="2026-03-03T13:55:20.399376374Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 3 13:55:20.410157 containerd[1567]: time="2026-03-03T13:55:20.404890737Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:20.436285 containerd[1567]: time="2026-03-03T13:55:20.435721054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:20.437562 containerd[1567]: time="2026-03-03T13:55:20.437524787Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 8.207756236s"
Mar 3 13:55:20.437799 containerd[1567]: time="2026-03-03T13:55:20.437771698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 3 13:55:20.447535 containerd[1567]: time="2026-03-03T13:55:20.446459027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 3 13:55:20.572191 containerd[1567]: time="2026-03-03T13:55:20.571859871Z" level=info msg="CreateContainer within sandbox \"1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 3 13:55:20.611024 containerd[1567]: time="2026-03-03T13:55:20.606872179Z" level=info msg="Container 0d38f0afe6c7afe6a4f6a8a357ab8adb089f6d1dc39535256628032238751d05: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:55:20.651794 containerd[1567]: time="2026-03-03T13:55:20.651522327Z" level=info msg="CreateContainer within sandbox \"1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0d38f0afe6c7afe6a4f6a8a357ab8adb089f6d1dc39535256628032238751d05\""
Mar 3 13:55:20.655091 containerd[1567]: time="2026-03-03T13:55:20.653480579Z" level=info msg="StartContainer for \"0d38f0afe6c7afe6a4f6a8a357ab8adb089f6d1dc39535256628032238751d05\""
Mar 3 13:55:20.656278 containerd[1567]: time="2026-03-03T13:55:20.656246586Z" level=info msg="connecting to shim 0d38f0afe6c7afe6a4f6a8a357ab8adb089f6d1dc39535256628032238751d05" address="unix:///run/containerd/s/e927ac249e9feadd21436bd9e9f4e1f5870edf7c3a6e348a9c851189ba46a89f" protocol=ttrpc version=3
Mar 3 13:55:20.780698 systemd[1]: Started cri-containerd-0d38f0afe6c7afe6a4f6a8a357ab8adb089f6d1dc39535256628032238751d05.scope - libcontainer container 0d38f0afe6c7afe6a4f6a8a357ab8adb089f6d1dc39535256628032238751d05.
Mar 3 13:55:21.066366 containerd[1567]: time="2026-03-03T13:55:21.066195219Z" level=info msg="StartContainer for \"0d38f0afe6c7afe6a4f6a8a357ab8adb089f6d1dc39535256628032238751d05\" returns successfully"
Mar 3 13:55:21.958981 kubelet[2857]: I0303 13:55:21.957361 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54b9f97489-lmdc7" podStartSLOduration=56.290897462 podStartE2EDuration="1m6.957340103s" podCreationTimestamp="2026-03-03 13:54:15 +0000 UTC" firstStartedPulling="2026-03-03 13:55:09.778310046 +0000 UTC m=+133.190918288" lastFinishedPulling="2026-03-03 13:55:20.444752676 +0000 UTC m=+143.857360929" observedRunningTime="2026-03-03 13:55:21.708341393 +0000 UTC m=+145.120949666" watchObservedRunningTime="2026-03-03 13:55:21.957340103 +0000 UTC m=+145.369948346"
Mar 3 13:55:27.497003 containerd[1567]: time="2026-03-03T13:55:27.496819654Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:27.502121 containerd[1567]: time="2026-03-03T13:55:27.501871685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Mar 3 13:55:27.503782 containerd[1567]: time="2026-03-03T13:55:27.503720402Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:27.518569 containerd[1567]: time="2026-03-03T13:55:27.518456259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:27.520533 containerd[1567]: time="2026-03-03T13:55:27.520345304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 7.073613168s"
Mar 3 13:55:27.520533 containerd[1567]: time="2026-03-03T13:55:27.520432457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 3 13:55:27.524837 containerd[1567]: time="2026-03-03T13:55:27.524777250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Mar 3 13:55:27.564484 containerd[1567]: time="2026-03-03T13:55:27.564382093Z" level=info msg="CreateContainer within sandbox \"81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 3 13:55:27.640007 containerd[1567]: time="2026-03-03T13:55:27.639689576Z" level=info msg="Container e7cccf5117562abdffd7088a52acc37e98db7f950be9f8309a1b92e1a63fcda9: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:55:27.649885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617509767.mount: Deactivated successfully.
Mar 3 13:55:27.687414 containerd[1567]: time="2026-03-03T13:55:27.687057391Z" level=info msg="CreateContainer within sandbox \"81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e7cccf5117562abdffd7088a52acc37e98db7f950be9f8309a1b92e1a63fcda9\""
Mar 3 13:55:27.692849 containerd[1567]: time="2026-03-03T13:55:27.689785477Z" level=info msg="StartContainer for \"e7cccf5117562abdffd7088a52acc37e98db7f950be9f8309a1b92e1a63fcda9\""
Mar 3 13:55:27.692849 containerd[1567]: time="2026-03-03T13:55:27.692445958Z" level=info msg="connecting to shim e7cccf5117562abdffd7088a52acc37e98db7f950be9f8309a1b92e1a63fcda9" address="unix:///run/containerd/s/0c6e829a25d5bb5c5de753f20c09d2daf0bdbd3b110682484766ac896aee82cd" protocol=ttrpc version=3
Mar 3 13:55:27.780474 systemd[1]: Started cri-containerd-e7cccf5117562abdffd7088a52acc37e98db7f950be9f8309a1b92e1a63fcda9.scope - libcontainer container e7cccf5117562abdffd7088a52acc37e98db7f950be9f8309a1b92e1a63fcda9.
Mar 3 13:55:28.090600 containerd[1567]: time="2026-03-03T13:55:28.090517003Z" level=info msg="StartContainer for \"e7cccf5117562abdffd7088a52acc37e98db7f950be9f8309a1b92e1a63fcda9\" returns successfully"
Mar 3 13:55:29.057475 kubelet[2857]: I0303 13:55:29.057258 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-68c9b68fff-xwbpz" podStartSLOduration=58.082177993 podStartE2EDuration="1m14.057228342s" podCreationTimestamp="2026-03-03 13:54:15 +0000 UTC" firstStartedPulling="2026-03-03 13:55:11.548453453 +0000 UTC m=+134.961061697" lastFinishedPulling="2026-03-03 13:55:27.523503803 +0000 UTC m=+150.936112046" observedRunningTime="2026-03-03 13:55:28.802848321 +0000 UTC m=+152.215456563" watchObservedRunningTime="2026-03-03 13:55:29.057228342 +0000 UTC m=+152.469836586"
Mar 3 13:55:29.300954 kubelet[2857]: E0303 13:55:29.299589 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:55:30.784230 kubelet[2857]: I0303 13:55:30.783359 2857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 3 13:55:32.656354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount916051774.mount: Deactivated successfully.
Mar 3 13:55:34.891691 containerd[1567]: time="2026-03-03T13:55:34.891450108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:34.897626 containerd[1567]: time="2026-03-03T13:55:34.897370178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Mar 3 13:55:34.902753 containerd[1567]: time="2026-03-03T13:55:34.902540404Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:34.921946 containerd[1567]: time="2026-03-03T13:55:34.920191866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:34.921946 containerd[1567]: time="2026-03-03T13:55:34.920969877Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 7.38732348s"
Mar 3 13:55:34.921946 containerd[1567]: time="2026-03-03T13:55:34.921029879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Mar 3 13:55:34.934193 containerd[1567]: time="2026-03-03T13:55:34.932617271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Mar 3 13:55:34.950082 containerd[1567]: time="2026-03-03T13:55:34.950000114Z" level=info msg="CreateContainer within sandbox \"37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Mar 3 13:55:34.999012 containerd[1567]: time="2026-03-03T13:55:34.998814846Z" level=info msg="Container 3a89499a9140c09dc450b567f19902c2c59fe4b2446c2b55dfe923095b9c6335: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:55:35.037465 containerd[1567]: time="2026-03-03T13:55:35.037360928Z" level=info msg="CreateContainer within sandbox \"37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3a89499a9140c09dc450b567f19902c2c59fe4b2446c2b55dfe923095b9c6335\""
Mar 3 13:55:35.046613 containerd[1567]: time="2026-03-03T13:55:35.046153876Z" level=info msg="StartContainer for \"3a89499a9140c09dc450b567f19902c2c59fe4b2446c2b55dfe923095b9c6335\""
Mar 3 13:55:35.051566 containerd[1567]: time="2026-03-03T13:55:35.050544384Z" level=info msg="connecting to shim 3a89499a9140c09dc450b567f19902c2c59fe4b2446c2b55dfe923095b9c6335" address="unix:///run/containerd/s/9663d7e0fa6a03aeedef7e66a0451dbfc66e2f467823aeab7aa75da5f4f58efe" protocol=ttrpc version=3
Mar 3 13:55:35.162245 systemd[1]: Started cri-containerd-3a89499a9140c09dc450b567f19902c2c59fe4b2446c2b55dfe923095b9c6335.scope - libcontainer container 3a89499a9140c09dc450b567f19902c2c59fe4b2446c2b55dfe923095b9c6335.
Mar 3 13:55:35.385860 containerd[1567]: time="2026-03-03T13:55:35.385618713Z" level=info msg="StartContainer for \"3a89499a9140c09dc450b567f19902c2c59fe4b2446c2b55dfe923095b9c6335\" returns successfully"
Mar 3 13:55:35.592357 containerd[1567]: time="2026-03-03T13:55:35.592260712Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:35.598372 containerd[1567]: time="2026-03-03T13:55:35.598133012Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Mar 3 13:55:35.613526 containerd[1567]: time="2026-03-03T13:55:35.610221585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 677.474451ms"
Mar 3 13:55:35.613526 containerd[1567]: time="2026-03-03T13:55:35.610357620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Mar 3 13:55:35.623866 containerd[1567]: time="2026-03-03T13:55:35.623768436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 3 13:55:35.630101 containerd[1567]: time="2026-03-03T13:55:35.630055139Z" level=info msg="CreateContainer within sandbox \"d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 3 13:55:35.684855 containerd[1567]: time="2026-03-03T13:55:35.684335289Z" level=info msg="Container 98bf9449442c8c0d507def6a4e5e6ac83d020bcf31e14e4628f224537713b1a1: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:55:35.693888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1223857078.mount: Deactivated successfully.
Mar 3 13:55:35.827458 containerd[1567]: time="2026-03-03T13:55:35.827344712Z" level=info msg="CreateContainer within sandbox \"d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"98bf9449442c8c0d507def6a4e5e6ac83d020bcf31e14e4628f224537713b1a1\""
Mar 3 13:55:35.842203 containerd[1567]: time="2026-03-03T13:55:35.842090336Z" level=info msg="StartContainer for \"98bf9449442c8c0d507def6a4e5e6ac83d020bcf31e14e4628f224537713b1a1\""
Mar 3 13:55:35.844799 containerd[1567]: time="2026-03-03T13:55:35.844530637Z" level=info msg="connecting to shim 98bf9449442c8c0d507def6a4e5e6ac83d020bcf31e14e4628f224537713b1a1" address="unix:///run/containerd/s/47fdf0f72b17ef862e71222b628a98c833a9a28aea2b38ea006b2f58a3be02d6" protocol=ttrpc version=3
Mar 3 13:55:35.969813 kubelet[2857]: I0303 13:55:35.969561 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-dqcvl" podStartSLOduration=57.70120032 podStartE2EDuration="1m20.969539105s" podCreationTimestamp="2026-03-03 13:54:15 +0000 UTC" firstStartedPulling="2026-03-03 13:55:11.661167009 +0000 UTC m=+135.073775253" lastFinishedPulling="2026-03-03 13:55:34.929505796 +0000 UTC m=+158.342114038" observedRunningTime="2026-03-03 13:55:35.964156295 +0000 UTC m=+159.376764538" watchObservedRunningTime="2026-03-03 13:55:35.969539105 +0000 UTC m=+159.382147369"
Mar 3 13:55:35.983225 systemd[1]: Started cri-containerd-98bf9449442c8c0d507def6a4e5e6ac83d020bcf31e14e4628f224537713b1a1.scope - libcontainer container 98bf9449442c8c0d507def6a4e5e6ac83d020bcf31e14e4628f224537713b1a1.
Mar 3 13:55:36.270011 containerd[1567]: time="2026-03-03T13:55:36.266423585Z" level=info msg="StartContainer for \"98bf9449442c8c0d507def6a4e5e6ac83d020bcf31e14e4628f224537713b1a1\" returns successfully"
Mar 3 13:55:36.950573 kubelet[2857]: I0303 13:55:36.949588 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-68c9b68fff-5cpns" podStartSLOduration=58.665085582 podStartE2EDuration="1m21.949565548s" podCreationTimestamp="2026-03-03 13:54:15 +0000 UTC" firstStartedPulling="2026-03-03 13:55:12.337996503 +0000 UTC m=+135.750604747" lastFinishedPulling="2026-03-03 13:55:35.62247638 +0000 UTC m=+159.035084713" observedRunningTime="2026-03-03 13:55:36.94852578 +0000 UTC m=+160.361134023" watchObservedRunningTime="2026-03-03 13:55:36.949565548 +0000 UTC m=+160.362173801"
Mar 3 13:55:38.391699 containerd[1567]: time="2026-03-03T13:55:38.391199590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:38.394600 containerd[1567]: time="2026-03-03T13:55:38.393249780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 3 13:55:38.398542 containerd[1567]: time="2026-03-03T13:55:38.397457515Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:38.425083 containerd[1567]: time="2026-03-03T13:55:38.424124929Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 3 13:55:38.426133 containerd[1567]: time="2026-03-03T13:55:38.426007998Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.802182297s"
Mar 3 13:55:38.426133 containerd[1567]: time="2026-03-03T13:55:38.426098767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 3 13:55:38.436541 containerd[1567]: time="2026-03-03T13:55:38.436412744Z" level=info msg="CreateContainer within sandbox \"19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 3 13:55:38.465141 containerd[1567]: time="2026-03-03T13:55:38.464811968Z" level=info msg="Container 0c8997b3113a824e252645974b82bdc3674b810cfcd5a358e14b2051dd9b02be: CDI devices from CRI Config.CDIDevices: []"
Mar 3 13:55:38.494151 containerd[1567]: time="2026-03-03T13:55:38.494067172Z" level=info msg="CreateContainer within sandbox \"19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0c8997b3113a824e252645974b82bdc3674b810cfcd5a358e14b2051dd9b02be\""
Mar 3 13:55:38.502151 containerd[1567]: time="2026-03-03T13:55:38.500583075Z" level=info msg="StartContainer for \"0c8997b3113a824e252645974b82bdc3674b810cfcd5a358e14b2051dd9b02be\""
Mar 3 13:55:38.504362 containerd[1567]: time="2026-03-03T13:55:38.504278497Z" level=info msg="connecting to shim 0c8997b3113a824e252645974b82bdc3674b810cfcd5a358e14b2051dd9b02be" address="unix:///run/containerd/s/84fc96e52af31f478b5c0491c095fed881a28859c90f140a1a2a7a8ee3739964" protocol=ttrpc version=3
Mar 3 13:55:38.579140 systemd[1]: Started cri-containerd-0c8997b3113a824e252645974b82bdc3674b810cfcd5a358e14b2051dd9b02be.scope - libcontainer container 0c8997b3113a824e252645974b82bdc3674b810cfcd5a358e14b2051dd9b02be.
Mar 3 13:55:38.850537 containerd[1567]: time="2026-03-03T13:55:38.850457014Z" level=info msg="StartContainer for \"0c8997b3113a824e252645974b82bdc3674b810cfcd5a358e14b2051dd9b02be\" returns successfully"
Mar 3 13:55:39.598836 kubelet[2857]: I0303 13:55:39.598097 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bg6h7" podStartSLOduration=54.747671246 podStartE2EDuration="1m24.598077489s" podCreationTimestamp="2026-03-03 13:54:15 +0000 UTC" firstStartedPulling="2026-03-03 13:55:08.577812814 +0000 UTC m=+131.990421057" lastFinishedPulling="2026-03-03 13:55:38.428219047 +0000 UTC m=+161.840827300" observedRunningTime="2026-03-03 13:55:39.011223118 +0000 UTC m=+162.423831360" watchObservedRunningTime="2026-03-03 13:55:39.598077489 +0000 UTC m=+163.010685732"
Mar 3 13:55:39.690777 kubelet[2857]: I0303 13:55:39.690539 2857 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 3 13:55:39.693860 kubelet[2857]: I0303 13:55:39.693641 2857 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 3 13:55:41.274775 kubelet[2857]: E0303 13:55:41.274185 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:55:56.131816 systemd[1]: Started sshd@9-10.0.0.111:22-10.0.0.1:35694.service - OpenSSH per-connection server daemon (10.0.0.1:35694).
Mar 3 13:55:56.416864 sshd[5827]: Accepted publickey for core from 10.0.0.1 port 35694 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:55:56.421876 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:55:56.445163 systemd-logind[1542]: New session 10 of user core.
Mar 3 13:55:56.462783 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 3 13:55:57.788103 sshd[5830]: Connection closed by 10.0.0.1 port 35694
Mar 3 13:55:57.788517 sshd-session[5827]: pam_unix(sshd:session): session closed for user core
Mar 3 13:55:57.799083 systemd[1]: sshd@9-10.0.0.111:22-10.0.0.1:35694.service: Deactivated successfully.
Mar 3 13:55:57.807887 systemd[1]: session-10.scope: Deactivated successfully.
Mar 3 13:55:57.812782 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit.
Mar 3 13:55:57.819757 systemd-logind[1542]: Removed session 10.
Mar 3 13:55:58.273828 kubelet[2857]: E0303 13:55:58.273616 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:56:02.847618 systemd[1]: Started sshd@10-10.0.0.111:22-10.0.0.1:37184.service - OpenSSH per-connection server daemon (10.0.0.1:37184).
Mar 3 13:56:03.066345 sshd[5886]: Accepted publickey for core from 10.0.0.1 port 37184 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:56:03.073280 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:56:03.096286 systemd-logind[1542]: New session 11 of user core.
Mar 3 13:56:03.158856 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 3 13:56:03.723195 sshd[5889]: Connection closed by 10.0.0.1 port 37184
Mar 3 13:56:03.723305 sshd-session[5886]: pam_unix(sshd:session): session closed for user core
Mar 3 13:56:03.735001 systemd[1]: sshd@10-10.0.0.111:22-10.0.0.1:37184.service: Deactivated successfully.
Mar 3 13:56:03.740883 systemd[1]: session-11.scope: Deactivated successfully.
Mar 3 13:56:03.745674 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit.
Mar 3 13:56:03.757891 systemd-logind[1542]: Removed session 11.
Mar 3 13:56:08.761245 systemd[1]: Started sshd@11-10.0.0.111:22-10.0.0.1:37200.service - OpenSSH per-connection server daemon (10.0.0.1:37200).
Mar 3 13:56:09.088039 sshd[5932]: Accepted publickey for core from 10.0.0.1 port 37200 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:56:09.092122 sshd-session[5932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:56:09.141842 systemd-logind[1542]: New session 12 of user core.
Mar 3 13:56:09.148182 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 3 13:56:09.560774 sshd[5935]: Connection closed by 10.0.0.1 port 37200
Mar 3 13:56:09.561385 sshd-session[5932]: pam_unix(sshd:session): session closed for user core
Mar 3 13:56:09.576550 systemd[1]: sshd@11-10.0.0.111:22-10.0.0.1:37200.service: Deactivated successfully.
Mar 3 13:56:09.576572 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit.
Mar 3 13:56:09.588493 systemd[1]: session-12.scope: Deactivated successfully.
Mar 3 13:56:09.637434 systemd-logind[1542]: Removed session 12.
Mar 3 13:56:14.600284 systemd[1]: Started sshd@12-10.0.0.111:22-10.0.0.1:55780.service - OpenSSH per-connection server daemon (10.0.0.1:55780).
Mar 3 13:56:14.764006 sshd[5949]: Accepted publickey for core from 10.0.0.1 port 55780 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:56:14.768613 sshd-session[5949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:56:14.802696 systemd-logind[1542]: New session 13 of user core.
Mar 3 13:56:14.825251 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 3 13:56:15.359826 sshd[5952]: Connection closed by 10.0.0.1 port 55780
Mar 3 13:56:15.361554 sshd-session[5949]: pam_unix(sshd:session): session closed for user core
Mar 3 13:56:15.385679 systemd[1]: sshd@12-10.0.0.111:22-10.0.0.1:55780.service: Deactivated successfully.
Mar 3 13:56:15.400547 systemd[1]: session-13.scope: Deactivated successfully.
Mar 3 13:56:15.412378 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit.
Mar 3 13:56:15.432530 systemd-logind[1542]: Removed session 13.
Mar 3 13:56:17.292574 kubelet[2857]: E0303 13:56:17.292460 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:56:20.402474 systemd[1]: Started sshd@13-10.0.0.111:22-10.0.0.1:42466.service - OpenSSH per-connection server daemon (10.0.0.1:42466).
Mar 3 13:56:20.573237 sshd[5967]: Accepted publickey for core from 10.0.0.1 port 42466 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:56:20.581261 sshd-session[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:56:20.615499 systemd-logind[1542]: New session 14 of user core.
Mar 3 13:56:20.632346 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 3 13:56:20.954712 sshd[5970]: Connection closed by 10.0.0.1 port 42466
Mar 3 13:56:20.956190 sshd-session[5967]: pam_unix(sshd:session): session closed for user core
Mar 3 13:56:20.969182 systemd[1]: sshd@13-10.0.0.111:22-10.0.0.1:42466.service: Deactivated successfully.
Mar 3 13:56:20.973866 systemd[1]: session-14.scope: Deactivated successfully.
Mar 3 13:56:20.977099 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit.
Mar 3 13:56:20.983296 systemd-logind[1542]: Removed session 14.
Mar 3 13:56:21.289537 kubelet[2857]: E0303 13:56:21.287842 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:56:25.977596 systemd[1]: Started sshd@14-10.0.0.111:22-10.0.0.1:42468.service - OpenSSH per-connection server daemon (10.0.0.1:42468).
Mar 3 13:56:26.148028 sshd[6017]: Accepted publickey for core from 10.0.0.1 port 42468 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:56:26.150998 sshd-session[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:56:26.172717 systemd-logind[1542]: New session 15 of user core.
Mar 3 13:56:26.193467 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 3 13:56:26.562439 sshd[6026]: Connection closed by 10.0.0.1 port 42468
Mar 3 13:56:26.563135 sshd-session[6017]: pam_unix(sshd:session): session closed for user core
Mar 3 13:56:26.571605 systemd[1]: sshd@14-10.0.0.111:22-10.0.0.1:42468.service: Deactivated successfully.
Mar 3 13:56:26.580330 systemd[1]: session-15.scope: Deactivated successfully.
Mar 3 13:56:26.584019 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit.
Mar 3 13:56:26.588721 systemd-logind[1542]: Removed session 15.
Mar 3 13:56:31.277609 kubelet[2857]: E0303 13:56:31.277400 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:56:31.601266 systemd[1]: Started sshd@15-10.0.0.111:22-10.0.0.1:59554.service - OpenSSH per-connection server daemon (10.0.0.1:59554).
Mar 3 13:56:31.925796 sshd[6068]: Accepted publickey for core from 10.0.0.1 port 59554 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:56:31.930471 sshd-session[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:56:31.950293 systemd-logind[1542]: New session 16 of user core.
Mar 3 13:56:31.968892 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 3 13:56:32.327499 sshd[6071]: Connection closed by 10.0.0.1 port 59554
Mar 3 13:56:32.327194 sshd-session[6068]: pam_unix(sshd:session): session closed for user core
Mar 3 13:56:32.346445 systemd[1]: sshd@15-10.0.0.111:22-10.0.0.1:59554.service: Deactivated successfully.
Mar 3 13:56:32.347081 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit.
Mar 3 13:56:32.351408 systemd[1]: session-16.scope: Deactivated successfully.
Mar 3 13:56:32.357171 systemd-logind[1542]: Removed session 16.
Mar 3 13:56:37.383725 systemd[1]: Started sshd@16-10.0.0.111:22-10.0.0.1:59558.service - OpenSSH per-connection server daemon (10.0.0.1:59558).
Mar 3 13:56:37.628705 sshd[6099]: Accepted publickey for core from 10.0.0.1 port 59558 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:56:37.641707 sshd-session[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:56:37.681398 systemd-logind[1542]: New session 17 of user core.
Mar 3 13:56:37.696361 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 3 13:56:38.121744 sshd[6102]: Connection closed by 10.0.0.1 port 59558
Mar 3 13:56:38.121181 sshd-session[6099]: pam_unix(sshd:session): session closed for user core
Mar 3 13:56:38.136486 systemd[1]: sshd@16-10.0.0.111:22-10.0.0.1:59558.service: Deactivated successfully.
Mar 3 13:56:38.148274 systemd[1]: session-17.scope: Deactivated successfully.
Mar 3 13:56:38.159304 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Mar 3 13:56:38.165496 systemd-logind[1542]: Removed session 17.
Mar 3 13:56:42.276168 kubelet[2857]: E0303 13:56:42.275790 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:56:43.157616 systemd[1]: Started sshd@17-10.0.0.111:22-10.0.0.1:41558.service - OpenSSH per-connection server daemon (10.0.0.1:41558).
Mar 3 13:56:43.400310 sshd[6170]: Accepted publickey for core from 10.0.0.1 port 41558 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:56:43.406794 sshd-session[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:56:43.452778 systemd-logind[1542]: New session 18 of user core.
Mar 3 13:56:43.470792 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 3 13:56:44.000225 sshd[6173]: Connection closed by 10.0.0.1 port 41558
Mar 3 13:56:44.007226 sshd-session[6170]: pam_unix(sshd:session): session closed for user core
Mar 3 13:56:44.042502 systemd[1]: sshd@17-10.0.0.111:22-10.0.0.1:41558.service: Deactivated successfully.
Mar 3 13:56:44.051224 systemd[1]: session-18.scope: Deactivated successfully.
Mar 3 13:56:44.058462 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Mar 3 13:56:44.062371 systemd-logind[1542]: Removed session 18.
Mar 3 13:56:44.278135 kubelet[2857]: E0303 13:56:44.275719 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:56:49.035654 systemd[1]: Started sshd@18-10.0.0.111:22-10.0.0.1:41566.service - OpenSSH per-connection server daemon (10.0.0.1:41566).
Mar 3 13:56:49.154768 sshd[6189]: Accepted publickey for core from 10.0.0.1 port 41566 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 13:56:49.158672 sshd-session[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 13:56:49.176677 systemd-logind[1542]: New session 19 of user core.
Mar 3 13:56:49.198376 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 3 13:56:49.470230 sshd[6192]: Connection closed by 10.0.0.1 port 41566
Mar 3 13:56:49.474283 sshd-session[6189]: pam_unix(sshd:session): session closed for user core
Mar 3 13:56:49.483625 systemd[1]: sshd@18-10.0.0.111:22-10.0.0.1:41566.service: Deactivated successfully.
Mar 3 13:56:49.488187 systemd[1]: session-19.scope: Deactivated successfully.
Mar 3 13:56:49.497501 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Mar 3 13:56:49.501694 systemd-logind[1542]: Removed session 19.
Mar 3 13:56:54.294674 kubelet[2857]: E0303 13:56:54.294393 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 13:56:54.511320 systemd[1]: Started sshd@19-10.0.0.111:22-10.0.0.1:42058.service - OpenSSH per-connection server daemon (10.0.0.1:42058).
Mar 3 13:56:54.677106 sshd[6228]: Accepted publickey for core from 10.0.0.1 port 42058 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:56:54.685650 sshd-session[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:56:54.721457 systemd-logind[1542]: New session 20 of user core. Mar 3 13:56:54.733361 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 3 13:56:55.137554 sshd[6231]: Connection closed by 10.0.0.1 port 42058 Mar 3 13:56:55.140287 sshd-session[6228]: pam_unix(sshd:session): session closed for user core Mar 3 13:56:55.176267 systemd[1]: sshd@19-10.0.0.111:22-10.0.0.1:42058.service: Deactivated successfully. Mar 3 13:56:55.210270 systemd[1]: session-20.scope: Deactivated successfully. Mar 3 13:56:55.239579 systemd-logind[1542]: Session 20 logged out. Waiting for processes to exit. Mar 3 13:56:55.250420 systemd-logind[1542]: Removed session 20. Mar 3 13:57:00.178692 systemd[1]: Started sshd@20-10.0.0.111:22-10.0.0.1:46470.service - OpenSSH per-connection server daemon (10.0.0.1:46470). Mar 3 13:57:00.291297 sshd[6320]: Accepted publickey for core from 10.0.0.1 port 46470 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:00.300280 sshd-session[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:00.328030 systemd-logind[1542]: New session 21 of user core. Mar 3 13:57:00.334457 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 3 13:57:00.689434 sshd[6323]: Connection closed by 10.0.0.1 port 46470 Mar 3 13:57:00.693278 sshd-session[6320]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:00.724567 systemd[1]: sshd@20-10.0.0.111:22-10.0.0.1:46470.service: Deactivated successfully. Mar 3 13:57:00.730791 systemd[1]: session-21.scope: Deactivated successfully. Mar 3 13:57:00.734408 systemd-logind[1542]: Session 21 logged out. Waiting for processes to exit. 
Mar 3 13:57:00.740648 systemd-logind[1542]: Removed session 21. Mar 3 13:57:05.763714 systemd[1]: Started sshd@21-10.0.0.111:22-10.0.0.1:46482.service - OpenSSH per-connection server daemon (10.0.0.1:46482). Mar 3 13:57:05.933374 sshd[6340]: Accepted publickey for core from 10.0.0.1 port 46482 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:05.934626 sshd-session[6340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:05.961035 systemd-logind[1542]: New session 22 of user core. Mar 3 13:57:05.983229 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 3 13:57:06.352048 sshd[6343]: Connection closed by 10.0.0.1 port 46482 Mar 3 13:57:06.351227 sshd-session[6340]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:06.357999 systemd[1]: sshd@21-10.0.0.111:22-10.0.0.1:46482.service: Deactivated successfully. Mar 3 13:57:06.366036 systemd[1]: session-22.scope: Deactivated successfully. Mar 3 13:57:06.369229 systemd-logind[1542]: Session 22 logged out. Waiting for processes to exit. Mar 3 13:57:06.378698 systemd-logind[1542]: Removed session 22. Mar 3 13:57:11.377657 systemd[1]: Started sshd@22-10.0.0.111:22-10.0.0.1:37668.service - OpenSSH per-connection server daemon (10.0.0.1:37668). Mar 3 13:57:11.501181 sshd[6382]: Accepted publickey for core from 10.0.0.1 port 37668 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:11.504222 sshd-session[6382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:11.528026 systemd-logind[1542]: New session 23 of user core. Mar 3 13:57:11.538403 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 3 13:57:11.845087 sshd[6385]: Connection closed by 10.0.0.1 port 37668 Mar 3 13:57:11.846450 sshd-session[6382]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:11.858208 systemd[1]: sshd@22-10.0.0.111:22-10.0.0.1:37668.service: Deactivated successfully. 
Mar 3 13:57:11.868670 systemd[1]: session-23.scope: Deactivated successfully. Mar 3 13:57:11.874671 systemd-logind[1542]: Session 23 logged out. Waiting for processes to exit. Mar 3 13:57:11.880508 systemd-logind[1542]: Removed session 23. Mar 3 13:57:15.290370 kubelet[2857]: E0303 13:57:15.290171 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:57:16.871527 systemd[1]: Started sshd@23-10.0.0.111:22-10.0.0.1:37684.service - OpenSSH per-connection server daemon (10.0.0.1:37684). Mar 3 13:57:17.029066 sshd[6400]: Accepted publickey for core from 10.0.0.1 port 37684 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:17.037059 sshd-session[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:17.060356 systemd-logind[1542]: New session 24 of user core. Mar 3 13:57:17.076599 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 3 13:57:17.375574 sshd[6403]: Connection closed by 10.0.0.1 port 37684 Mar 3 13:57:17.373477 sshd-session[6400]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:17.386730 systemd[1]: sshd@23-10.0.0.111:22-10.0.0.1:37684.service: Deactivated successfully. Mar 3 13:57:17.396573 systemd[1]: session-24.scope: Deactivated successfully. Mar 3 13:57:17.402434 systemd-logind[1542]: Session 24 logged out. Waiting for processes to exit. Mar 3 13:57:17.427377 systemd-logind[1542]: Removed session 24. Mar 3 13:57:22.439227 systemd[1]: Started sshd@24-10.0.0.111:22-10.0.0.1:51388.service - OpenSSH per-connection server daemon (10.0.0.1:51388). 
Mar 3 13:57:22.557027 sshd[6442]: Accepted publickey for core from 10.0.0.1 port 51388 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:22.565060 sshd-session[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:22.587019 systemd-logind[1542]: New session 25 of user core. Mar 3 13:57:22.600491 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 3 13:57:22.942275 sshd[6445]: Connection closed by 10.0.0.1 port 51388 Mar 3 13:57:22.947197 sshd-session[6442]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:22.962260 systemd[1]: sshd@24-10.0.0.111:22-10.0.0.1:51388.service: Deactivated successfully. Mar 3 13:57:22.967988 systemd[1]: session-25.scope: Deactivated successfully. Mar 3 13:57:22.974566 systemd-logind[1542]: Session 25 logged out. Waiting for processes to exit. Mar 3 13:57:22.978345 systemd-logind[1542]: Removed session 25. Mar 3 13:57:27.975283 systemd[1]: Started sshd@25-10.0.0.111:22-10.0.0.1:51404.service - OpenSSH per-connection server daemon (10.0.0.1:51404). Mar 3 13:57:28.326085 sshd[6461]: Accepted publickey for core from 10.0.0.1 port 51404 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:28.334272 sshd-session[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:28.363695 systemd-logind[1542]: New session 26 of user core. Mar 3 13:57:28.392994 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 3 13:57:28.979878 sshd[6464]: Connection closed by 10.0.0.1 port 51404 Mar 3 13:57:28.985253 sshd-session[6461]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:29.017362 systemd[1]: sshd@25-10.0.0.111:22-10.0.0.1:51404.service: Deactivated successfully. Mar 3 13:57:29.032574 systemd[1]: session-26.scope: Deactivated successfully. Mar 3 13:57:29.045710 systemd-logind[1542]: Session 26 logged out. Waiting for processes to exit. 
Mar 3 13:57:29.051455 systemd-logind[1542]: Removed session 26. Mar 3 13:57:32.276452 kubelet[2857]: E0303 13:57:32.275676 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:57:34.028556 systemd[1]: Started sshd@26-10.0.0.111:22-10.0.0.1:33108.service - OpenSSH per-connection server daemon (10.0.0.1:33108). Mar 3 13:57:34.342257 sshd[6504]: Accepted publickey for core from 10.0.0.1 port 33108 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:34.348332 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:34.377766 systemd-logind[1542]: New session 27 of user core. Mar 3 13:57:34.384496 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 3 13:57:34.928131 sshd[6507]: Connection closed by 10.0.0.1 port 33108 Mar 3 13:57:34.930146 sshd-session[6504]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:34.938035 systemd[1]: sshd@26-10.0.0.111:22-10.0.0.1:33108.service: Deactivated successfully. Mar 3 13:57:34.950837 systemd[1]: session-27.scope: Deactivated successfully. Mar 3 13:57:34.964470 systemd-logind[1542]: Session 27 logged out. Waiting for processes to exit. Mar 3 13:57:34.974363 systemd-logind[1542]: Removed session 27. 
Mar 3 13:57:35.307347 kubelet[2857]: E0303 13:57:35.305486 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:57:39.580552 containerd[1567]: time="2026-03-03T13:57:39.552357363Z" level=warning msg="container event discarded" container=e360d7d0aa230cc2725657ecf10f35e80b385d318c39aeb49d811345dc0957df type=CONTAINER_CREATED_EVENT Mar 3 13:57:39.638326 containerd[1567]: time="2026-03-03T13:57:39.638214448Z" level=warning msg="container event discarded" container=e360d7d0aa230cc2725657ecf10f35e80b385d318c39aeb49d811345dc0957df type=CONTAINER_STARTED_EVENT Mar 3 13:57:39.638326 containerd[1567]: time="2026-03-03T13:57:39.638275562Z" level=warning msg="container event discarded" container=ea04a224629e2a4daac8afc116498a2fca6ab92c1971068124567047b3f9edb1 type=CONTAINER_CREATED_EVENT Mar 3 13:57:39.638326 containerd[1567]: time="2026-03-03T13:57:39.638291331Z" level=warning msg="container event discarded" container=ea04a224629e2a4daac8afc116498a2fca6ab92c1971068124567047b3f9edb1 type=CONTAINER_STARTED_EVENT Mar 3 13:57:39.683723 containerd[1567]: time="2026-03-03T13:57:39.681616330Z" level=warning msg="container event discarded" container=48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0 type=CONTAINER_CREATED_EVENT Mar 3 13:57:39.717782 containerd[1567]: time="2026-03-03T13:57:39.717637050Z" level=warning msg="container event discarded" container=ff125fdbe2484f67ce2f8247a2d23b2d4c3b6fd160228426d587f21784fce0b3 type=CONTAINER_CREATED_EVENT Mar 3 13:57:39.717782 containerd[1567]: time="2026-03-03T13:57:39.717755862Z" level=warning msg="container event discarded" container=ff125fdbe2484f67ce2f8247a2d23b2d4c3b6fd160228426d587f21784fce0b3 type=CONTAINER_STARTED_EVENT Mar 3 13:57:39.733789 containerd[1567]: time="2026-03-03T13:57:39.733608598Z" level=warning msg="container event discarded" 
container=e739e58c7e6838a85c909bc2ed6107a501c974a8975692847619a6ad73a7d782 type=CONTAINER_CREATED_EVENT Mar 3 13:57:39.889680 containerd[1567]: time="2026-03-03T13:57:39.889176237Z" level=warning msg="container event discarded" container=d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221 type=CONTAINER_CREATED_EVENT Mar 3 13:57:39.954430 systemd[1]: Started sshd@27-10.0.0.111:22-10.0.0.1:33122.service - OpenSSH per-connection server daemon (10.0.0.1:33122). Mar 3 13:57:40.108627 containerd[1567]: time="2026-03-03T13:57:40.108405248Z" level=warning msg="container event discarded" container=e739e58c7e6838a85c909bc2ed6107a501c974a8975692847619a6ad73a7d782 type=CONTAINER_STARTED_EVENT Mar 3 13:57:40.158721 sshd[6546]: Accepted publickey for core from 10.0.0.1 port 33122 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:40.158569 sshd-session[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:40.188429 systemd-logind[1542]: New session 28 of user core. Mar 3 13:57:40.190702 containerd[1567]: time="2026-03-03T13:57:40.190555230Z" level=warning msg="container event discarded" container=48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0 type=CONTAINER_STARTED_EVENT Mar 3 13:57:40.201110 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 3 13:57:40.495560 containerd[1567]: time="2026-03-03T13:57:40.495371726Z" level=warning msg="container event discarded" container=d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221 type=CONTAINER_STARTED_EVENT Mar 3 13:57:40.598684 sshd[6549]: Connection closed by 10.0.0.1 port 33122 Mar 3 13:57:40.600413 sshd-session[6546]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:40.633767 systemd[1]: sshd@27-10.0.0.111:22-10.0.0.1:33122.service: Deactivated successfully. Mar 3 13:57:40.638194 systemd[1]: session-28.scope: Deactivated successfully. 
Mar 3 13:57:40.650384 systemd-logind[1542]: Session 28 logged out. Waiting for processes to exit. Mar 3 13:57:40.658090 systemd-logind[1542]: Removed session 28. Mar 3 13:57:45.667768 systemd[1]: Started sshd@28-10.0.0.111:22-10.0.0.1:54518.service - OpenSSH per-connection server daemon (10.0.0.1:54518). Mar 3 13:57:45.871736 sshd[6563]: Accepted publickey for core from 10.0.0.1 port 54518 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:45.881484 sshd-session[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:45.910863 systemd-logind[1542]: New session 29 of user core. Mar 3 13:57:45.949557 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 3 13:57:46.460088 sshd[6572]: Connection closed by 10.0.0.1 port 54518 Mar 3 13:57:46.459198 sshd-session[6563]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:46.483806 systemd[1]: sshd@28-10.0.0.111:22-10.0.0.1:54518.service: Deactivated successfully. Mar 3 13:57:46.487838 systemd-logind[1542]: Session 29 logged out. Waiting for processes to exit. Mar 3 13:57:46.491040 systemd[1]: session-29.scope: Deactivated successfully. Mar 3 13:57:46.497058 systemd-logind[1542]: Removed session 29. Mar 3 13:57:47.066881 update_engine[1546]: I20260303 13:57:47.066771 1546 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 3 13:57:47.067592 update_engine[1546]: I20260303 13:57:47.066882 1546 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 3 13:57:47.083078 update_engine[1546]: I20260303 13:57:47.076078 1546 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 3 13:57:47.083078 update_engine[1546]: I20260303 13:57:47.077538 1546 omaha_request_params.cc:62] Current group set to stable Mar 3 13:57:47.083078 update_engine[1546]: I20260303 13:57:47.079269 1546 update_attempter.cc:499] Already updated boot flags. Skipping. 
Mar 3 13:57:47.083078 update_engine[1546]: I20260303 13:57:47.079294 1546 update_attempter.cc:643] Scheduling an action processor start. Mar 3 13:57:47.083078 update_engine[1546]: I20260303 13:57:47.079320 1546 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 3 13:57:47.083078 update_engine[1546]: I20260303 13:57:47.079402 1546 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 3 13:57:47.083078 update_engine[1546]: I20260303 13:57:47.082350 1546 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 3 13:57:47.083078 update_engine[1546]: I20260303 13:57:47.082378 1546 omaha_request_action.cc:272] Request: Mar 3 13:57:47.083078 update_engine[1546]: Mar 3 13:57:47.083078 update_engine[1546]: Mar 3 13:57:47.083078 update_engine[1546]: Mar 3 13:57:47.083078 update_engine[1546]: Mar 3 13:57:47.083078 update_engine[1546]: Mar 3 13:57:47.083078 update_engine[1546]: Mar 3 13:57:47.083078 update_engine[1546]: Mar 3 13:57:47.083078 update_engine[1546]: Mar 3 13:57:47.083078 update_engine[1546]: I20260303 13:57:47.082392 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 3 13:57:47.146666 update_engine[1546]: I20260303 13:57:47.117694 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 3 13:57:47.148003 update_engine[1546]: I20260303 13:57:47.147441 1546 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 3 13:57:47.155863 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 3 13:57:47.172751 update_engine[1546]: E20260303 13:57:47.172547 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 3 13:57:47.173053 update_engine[1546]: I20260303 13:57:47.172760 1546 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 3 13:57:51.500732 systemd[1]: Started sshd@29-10.0.0.111:22-10.0.0.1:38508.service - OpenSSH per-connection server daemon (10.0.0.1:38508). Mar 3 13:57:51.635109 sshd[6594]: Accepted publickey for core from 10.0.0.1 port 38508 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:51.640264 sshd-session[6594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:51.652850 systemd-logind[1542]: New session 30 of user core. Mar 3 13:57:51.677807 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 3 13:57:52.033886 sshd[6609]: Connection closed by 10.0.0.1 port 38508 Mar 3 13:57:52.034528 sshd-session[6594]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:52.045598 systemd[1]: sshd@29-10.0.0.111:22-10.0.0.1:38508.service: Deactivated successfully. Mar 3 13:57:52.049955 systemd[1]: session-30.scope: Deactivated successfully. Mar 3 13:57:52.056600 systemd-logind[1542]: Session 30 logged out. Waiting for processes to exit. Mar 3 13:57:52.059618 systemd-logind[1542]: Removed session 30. Mar 3 13:57:55.278961 kubelet[2857]: E0303 13:57:55.278796 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:57:57.072496 systemd[1]: Started sshd@30-10.0.0.111:22-10.0.0.1:38512.service - OpenSSH per-connection server daemon (10.0.0.1:38512). 
Mar 3 13:57:57.271331 sshd[6685]: Accepted publickey for core from 10.0.0.1 port 38512 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:57:57.278197 sshd-session[6685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:57:57.301173 systemd-logind[1542]: New session 31 of user core. Mar 3 13:57:57.340173 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 3 13:57:57.700874 sshd[6690]: Connection closed by 10.0.0.1 port 38512 Mar 3 13:57:57.704197 sshd-session[6685]: pam_unix(sshd:session): session closed for user core Mar 3 13:57:57.719592 systemd[1]: sshd@30-10.0.0.111:22-10.0.0.1:38512.service: Deactivated successfully. Mar 3 13:57:57.723781 systemd[1]: session-31.scope: Deactivated successfully. Mar 3 13:57:57.725821 systemd-logind[1542]: Session 31 logged out. Waiting for processes to exit. Mar 3 13:57:57.730569 systemd-logind[1542]: Removed session 31. Mar 3 13:57:57.910606 update_engine[1546]: I20260303 13:57:57.910243 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 3 13:57:57.914489 update_engine[1546]: I20260303 13:57:57.910405 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 3 13:57:57.927882 update_engine[1546]: I20260303 13:57:57.927517 1546 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 3 13:57:57.937009 update_engine[1546]: E20260303 13:57:57.935870 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 3 13:57:57.937167 update_engine[1546]: I20260303 13:57:57.937069 1546 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 3 13:58:00.277293 kubelet[2857]: E0303 13:58:00.274641 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:58:01.432113 containerd[1567]: time="2026-03-03T13:58:01.431471410Z" level=warning msg="container event discarded" container=2894e0315d5f26d45cdb4015023bc5060f884c7862bf9d2d4d7c8547088a1d7c type=CONTAINER_CREATED_EVENT Mar 3 13:58:01.432113 containerd[1567]: time="2026-03-03T13:58:01.431570054Z" level=warning msg="container event discarded" container=2894e0315d5f26d45cdb4015023bc5060f884c7862bf9d2d4d7c8547088a1d7c type=CONTAINER_STARTED_EVENT Mar 3 13:58:01.895349 containerd[1567]: time="2026-03-03T13:58:01.895276905Z" level=warning msg="container event discarded" container=07458609b3b0fe4adc8d56fdf7de35150646b148bfb611c144a7802d6c3a4e00 type=CONTAINER_CREATED_EVENT Mar 3 13:58:02.634850 containerd[1567]: time="2026-03-03T13:58:02.634301455Z" level=warning msg="container event discarded" container=a7a69746a397888d7eee5d0663206ac24557f64ceaab79584218f8c77414c6b3 type=CONTAINER_CREATED_EVENT Mar 3 13:58:02.634850 containerd[1567]: time="2026-03-03T13:58:02.634409407Z" level=warning msg="container event discarded" container=a7a69746a397888d7eee5d0663206ac24557f64ceaab79584218f8c77414c6b3 type=CONTAINER_STARTED_EVENT Mar 3 13:58:02.634850 containerd[1567]: time="2026-03-03T13:58:02.634427550Z" level=warning msg="container event discarded" container=07458609b3b0fe4adc8d56fdf7de35150646b148bfb611c144a7802d6c3a4e00 type=CONTAINER_STARTED_EVENT Mar 3 13:58:02.750855 systemd[1]: Started sshd@31-10.0.0.111:22-10.0.0.1:48208.service - OpenSSH 
per-connection server daemon (10.0.0.1:48208). Mar 3 13:58:03.064604 sshd[6729]: Accepted publickey for core from 10.0.0.1 port 48208 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:03.070257 sshd-session[6729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:03.087306 systemd-logind[1542]: New session 32 of user core. Mar 3 13:58:03.096546 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 3 13:58:03.516686 sshd[6732]: Connection closed by 10.0.0.1 port 48208 Mar 3 13:58:03.517461 sshd-session[6729]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:03.534321 systemd[1]: sshd@31-10.0.0.111:22-10.0.0.1:48208.service: Deactivated successfully. Mar 3 13:58:03.540856 systemd[1]: session-32.scope: Deactivated successfully. Mar 3 13:58:03.546304 systemd-logind[1542]: Session 32 logged out. Waiting for processes to exit. Mar 3 13:58:03.558201 systemd-logind[1542]: Removed session 32. Mar 3 13:58:07.913386 update_engine[1546]: I20260303 13:58:07.912065 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 3 13:58:07.913386 update_engine[1546]: I20260303 13:58:07.912242 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 3 13:58:07.913386 update_engine[1546]: I20260303 13:58:07.912884 1546 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 3 13:58:07.960280 update_engine[1546]: E20260303 13:58:07.958299 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 3 13:58:07.960280 update_engine[1546]: I20260303 13:58:07.958420 1546 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 3 13:58:08.581509 systemd[1]: Started sshd@32-10.0.0.111:22-10.0.0.1:48216.service - OpenSSH per-connection server daemon (10.0.0.1:48216). 
Mar 3 13:58:08.773393 sshd[6776]: Accepted publickey for core from 10.0.0.1 port 48216 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:08.774360 sshd-session[6776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:08.827059 systemd-logind[1542]: New session 33 of user core. Mar 3 13:58:08.843386 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 3 13:58:09.196342 sshd[6779]: Connection closed by 10.0.0.1 port 48216 Mar 3 13:58:09.196756 sshd-session[6776]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:09.206036 systemd[1]: sshd@32-10.0.0.111:22-10.0.0.1:48216.service: Deactivated successfully. Mar 3 13:58:09.216101 systemd[1]: session-33.scope: Deactivated successfully. Mar 3 13:58:09.222338 systemd-logind[1542]: Session 33 logged out. Waiting for processes to exit. Mar 3 13:58:09.226095 systemd-logind[1542]: Removed session 33. Mar 3 13:58:09.278051 kubelet[2857]: E0303 13:58:09.277473 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:58:14.229521 systemd[1]: Started sshd@33-10.0.0.111:22-10.0.0.1:57696.service - OpenSSH per-connection server daemon (10.0.0.1:57696). Mar 3 13:58:14.278131 kubelet[2857]: E0303 13:58:14.275220 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:58:14.328856 sshd[6822]: Accepted publickey for core from 10.0.0.1 port 57696 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:14.335211 sshd-session[6822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:14.354016 systemd-logind[1542]: New session 34 of user core. Mar 3 13:58:14.365585 systemd[1]: Started session-34.scope - Session 34 of User core. 
Mar 3 13:58:14.750121 sshd[6825]: Connection closed by 10.0.0.1 port 57696 Mar 3 13:58:14.751349 sshd-session[6822]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:14.757142 containerd[1567]: time="2026-03-03T13:58:14.756780388Z" level=warning msg="container event discarded" container=29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf type=CONTAINER_CREATED_EVENT Mar 3 13:58:14.765733 systemd[1]: sshd@33-10.0.0.111:22-10.0.0.1:57696.service: Deactivated successfully. Mar 3 13:58:14.776625 systemd[1]: session-34.scope: Deactivated successfully. Mar 3 13:58:14.781150 systemd-logind[1542]: Session 34 logged out. Waiting for processes to exit. Mar 3 13:58:14.783695 systemd-logind[1542]: Removed session 34. Mar 3 13:58:15.205278 containerd[1567]: time="2026-03-03T13:58:15.205022854Z" level=warning msg="container event discarded" container=29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf type=CONTAINER_STARTED_EVENT Mar 3 13:58:17.911213 update_engine[1546]: I20260303 13:58:17.910537 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 3 13:58:17.911213 update_engine[1546]: I20260303 13:58:17.910660 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 3 13:58:17.912694 update_engine[1546]: I20260303 13:58:17.912559 1546 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 3 13:58:17.941658 update_engine[1546]: E20260303 13:58:17.938398 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 3 13:58:17.941658 update_engine[1546]: I20260303 13:58:17.939778 1546 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 3 13:58:17.941864 update_engine[1546]: I20260303 13:58:17.941700 1546 omaha_request_action.cc:617] Omaha request response: Mar 3 13:58:17.941864 update_engine[1546]: E20260303 13:58:17.941838 1546 omaha_request_action.cc:636] Omaha request network transfer failed. 
Mar 3 13:58:17.942595 update_engine[1546]: I20260303 13:58:17.941869 1546 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 3 13:58:17.942595 update_engine[1546]: I20260303 13:58:17.941879 1546 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 3 13:58:17.942595 update_engine[1546]: I20260303 13:58:17.941890 1546 update_attempter.cc:306] Processing Done. Mar 3 13:58:17.947751 update_engine[1546]: E20260303 13:58:17.946836 1546 update_attempter.cc:619] Update failed. Mar 3 13:58:17.947751 update_engine[1546]: I20260303 13:58:17.947417 1546 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 3 13:58:17.947751 update_engine[1546]: I20260303 13:58:17.947445 1546 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 3 13:58:17.947751 update_engine[1546]: I20260303 13:58:17.947459 1546 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Mar 3 13:58:17.947751 update_engine[1546]: I20260303 13:58:17.947571 1546 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 3 13:58:17.947751 update_engine[1546]: I20260303 13:58:17.947623 1546 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 3 13:58:17.947751 update_engine[1546]: I20260303 13:58:17.947635 1546 omaha_request_action.cc:272] Request: Mar 3 13:58:17.947751 update_engine[1546]: Mar 3 13:58:17.947751 update_engine[1546]: Mar 3 13:58:17.947751 update_engine[1546]: Mar 3 13:58:17.947751 update_engine[1546]: Mar 3 13:58:17.947751 update_engine[1546]: Mar 3 13:58:17.947751 update_engine[1546]: Mar 3 13:58:17.947751 update_engine[1546]: I20260303 13:58:17.947646 1546 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 3 13:58:17.947751 update_engine[1546]: I20260303 13:58:17.947695 1546 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 3 13:58:17.949698 update_engine[1546]: I20260303 13:58:17.949616 1546 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 3 13:58:17.950342 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 3 13:58:17.973395 update_engine[1546]: E20260303 13:58:17.972122 1546 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 3 13:58:17.973395 update_engine[1546]: I20260303 13:58:17.972285 1546 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 3 13:58:17.973395 update_engine[1546]: I20260303 13:58:17.972311 1546 omaha_request_action.cc:617] Omaha request response: Mar 3 13:58:17.973395 update_engine[1546]: I20260303 13:58:17.972326 1546 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 3 13:58:17.973395 update_engine[1546]: I20260303 13:58:17.972338 1546 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 3 13:58:17.973395 update_engine[1546]: I20260303 13:58:17.972348 1546 update_attempter.cc:306] Processing Done. Mar 3 13:58:17.973395 update_engine[1546]: I20260303 13:58:17.972359 1546 update_attempter.cc:310] Error event sent. Mar 3 13:58:17.973395 update_engine[1546]: I20260303 13:58:17.972378 1546 update_check_scheduler.cc:74] Next update check in 40m37s Mar 3 13:58:17.975237 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 3 13:58:19.800510 systemd[1]: Started sshd@34-10.0.0.111:22-10.0.0.1:57702.service - OpenSSH per-connection server daemon (10.0.0.1:57702). Mar 3 13:58:19.909768 sshd[6839]: Accepted publickey for core from 10.0.0.1 port 57702 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:19.928799 sshd-session[6839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:19.953197 systemd-logind[1542]: New session 35 of user core. 
Mar 3 13:58:19.968325 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 3 13:58:20.224364 containerd[1567]: time="2026-03-03T13:58:20.219620073Z" level=warning msg="container event discarded" container=29fa3a7d483b303382e55f9e1742caadc5372572b07dac53e8c7b463d0978bcf type=CONTAINER_STOPPED_EVENT Mar 3 13:58:20.358137 sshd[6842]: Connection closed by 10.0.0.1 port 57702 Mar 3 13:58:20.359218 sshd-session[6839]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:20.382763 systemd[1]: sshd@34-10.0.0.111:22-10.0.0.1:57702.service: Deactivated successfully. Mar 3 13:58:20.390608 systemd[1]: session-35.scope: Deactivated successfully. Mar 3 13:58:20.395162 systemd-logind[1542]: Session 35 logged out. Waiting for processes to exit. Mar 3 13:58:20.401409 systemd-logind[1542]: Removed session 35. Mar 3 13:58:20.409500 containerd[1567]: time="2026-03-03T13:58:20.409130626Z" level=warning msg="container event discarded" container=9370aabed52cc814196b05199ffc2dfe10eee4f1c1926be15708c0f1437c107d type=CONTAINER_CREATED_EVENT Mar 3 13:58:20.828787 containerd[1567]: time="2026-03-03T13:58:20.828709957Z" level=warning msg="container event discarded" container=9370aabed52cc814196b05199ffc2dfe10eee4f1c1926be15708c0f1437c107d type=CONTAINER_STARTED_EVENT Mar 3 13:58:25.298845 kubelet[2857]: E0303 13:58:25.298036 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:58:25.405795 systemd[1]: Started sshd@35-10.0.0.111:22-10.0.0.1:41926.service - OpenSSH per-connection server daemon (10.0.0.1:41926). 
Mar 3 13:58:25.596701 sshd[6877]: Accepted publickey for core from 10.0.0.1 port 41926 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:25.599143 sshd-session[6877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:25.636755 systemd-logind[1542]: New session 36 of user core. Mar 3 13:58:25.654282 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 3 13:58:26.043438 sshd[6880]: Connection closed by 10.0.0.1 port 41926 Mar 3 13:58:26.048614 sshd-session[6877]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:26.245574 systemd[1]: sshd@35-10.0.0.111:22-10.0.0.1:41926.service: Deactivated successfully. Mar 3 13:58:26.420520 systemd[1]: session-36.scope: Deactivated successfully. Mar 3 13:58:26.533126 systemd-logind[1542]: Session 36 logged out. Waiting for processes to exit. Mar 3 13:58:26.537737 systemd-logind[1542]: Removed session 36. Mar 3 13:58:31.078080 systemd[1]: Started sshd@36-10.0.0.111:22-10.0.0.1:35346.service - OpenSSH per-connection server daemon (10.0.0.1:35346). Mar 3 13:58:31.224449 sshd[6919]: Accepted publickey for core from 10.0.0.1 port 35346 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:31.225865 sshd-session[6919]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:31.264294 systemd-logind[1542]: New session 37 of user core. Mar 3 13:58:31.293495 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 3 13:58:31.605017 sshd[6922]: Connection closed by 10.0.0.1 port 35346 Mar 3 13:58:31.607552 sshd-session[6919]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:31.639305 systemd[1]: sshd@36-10.0.0.111:22-10.0.0.1:35346.service: Deactivated successfully. Mar 3 13:58:31.642822 systemd[1]: session-37.scope: Deactivated successfully. Mar 3 13:58:31.647121 systemd-logind[1542]: Session 37 logged out. Waiting for processes to exit. 
Mar 3 13:58:31.650773 systemd-logind[1542]: Removed session 37. Mar 3 13:58:36.651851 systemd[1]: Started sshd@37-10.0.0.111:22-10.0.0.1:35356.service - OpenSSH per-connection server daemon (10.0.0.1:35356). Mar 3 13:58:36.827381 sshd[6938]: Accepted publickey for core from 10.0.0.1 port 35356 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:36.830611 sshd-session[6938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:36.855710 systemd-logind[1542]: New session 38 of user core. Mar 3 13:58:36.881229 systemd[1]: Started session-38.scope - Session 38 of User core. Mar 3 13:58:37.358363 sshd[6941]: Connection closed by 10.0.0.1 port 35356 Mar 3 13:58:37.358871 sshd-session[6938]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:37.376263 systemd[1]: sshd@37-10.0.0.111:22-10.0.0.1:35356.service: Deactivated successfully. Mar 3 13:58:37.382457 systemd[1]: session-38.scope: Deactivated successfully. Mar 3 13:58:37.387363 systemd-logind[1542]: Session 38 logged out. Waiting for processes to exit. Mar 3 13:58:37.393058 systemd-logind[1542]: Removed session 38. Mar 3 13:58:42.405432 systemd[1]: Started sshd@38-10.0.0.111:22-10.0.0.1:57444.service - OpenSSH per-connection server daemon (10.0.0.1:57444). Mar 3 13:58:42.621370 sshd[6981]: Accepted publickey for core from 10.0.0.1 port 57444 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:42.624259 sshd-session[6981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:42.652739 systemd-logind[1542]: New session 39 of user core. Mar 3 13:58:42.668484 systemd[1]: Started session-39.scope - Session 39 of User core. Mar 3 13:58:42.991858 sshd[6985]: Connection closed by 10.0.0.1 port 57444 Mar 3 13:58:42.993489 sshd-session[6981]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:43.006836 systemd[1]: sshd@38-10.0.0.111:22-10.0.0.1:57444.service: Deactivated successfully. 
Mar 3 13:58:43.034759 systemd[1]: session-39.scope: Deactivated successfully. Mar 3 13:58:43.039889 systemd-logind[1542]: Session 39 logged out. Waiting for processes to exit. Mar 3 13:58:43.052536 systemd-logind[1542]: Removed session 39. Mar 3 13:58:44.277588 kubelet[2857]: E0303 13:58:44.275217 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:58:48.059254 systemd[1]: Started sshd@39-10.0.0.111:22-10.0.0.1:57448.service - OpenSSH per-connection server daemon (10.0.0.1:57448). Mar 3 13:58:48.270093 sshd[6999]: Accepted publickey for core from 10.0.0.1 port 57448 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:48.273455 sshd-session[6999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:48.301062 systemd-logind[1542]: New session 40 of user core. Mar 3 13:58:48.334356 systemd[1]: Started session-40.scope - Session 40 of User core. Mar 3 13:58:48.783116 sshd[7002]: Connection closed by 10.0.0.1 port 57448 Mar 3 13:58:48.780273 sshd-session[6999]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:48.800056 systemd-logind[1542]: Session 40 logged out. Waiting for processes to exit. Mar 3 13:58:48.801790 systemd[1]: sshd@39-10.0.0.111:22-10.0.0.1:57448.service: Deactivated successfully. Mar 3 13:58:48.807713 systemd[1]: session-40.scope: Deactivated successfully. Mar 3 13:58:48.831478 systemd-logind[1542]: Removed session 40. 
Mar 3 13:58:51.069457 containerd[1567]: time="2026-03-03T13:58:51.069320306Z" level=warning msg="container event discarded" container=48d32eed61f8f1d88d106aa9ff5b50badc46d3ff726d2a1e5e15f6193d63d3e0 type=CONTAINER_STOPPED_EVENT Mar 3 13:58:51.145554 containerd[1567]: time="2026-03-03T13:58:51.141873654Z" level=warning msg="container event discarded" container=d8621edeba6913d8b377d210fbcd6e8f427bde13be7698a3e0a872ae23fae221 type=CONTAINER_STOPPED_EVENT Mar 3 13:58:51.983383 containerd[1567]: time="2026-03-03T13:58:51.983224769Z" level=warning msg="container event discarded" container=f72100062297ea7ad58d05d5672d0d17662537af57737f0dea57689387b52f7b type=CONTAINER_CREATED_EVENT Mar 3 13:58:51.996821 containerd[1567]: time="2026-03-03T13:58:51.996751294Z" level=warning msg="container event discarded" container=cff787bc441989c9d968725edcb873da5f09c00b65ac21f390118201887aaba0 type=CONTAINER_CREATED_EVENT Mar 3 13:58:53.443209 containerd[1567]: time="2026-03-03T13:58:53.443126896Z" level=warning msg="container event discarded" container=f72100062297ea7ad58d05d5672d0d17662537af57737f0dea57689387b52f7b type=CONTAINER_STARTED_EVENT Mar 3 13:58:53.552872 containerd[1567]: time="2026-03-03T13:58:53.552787117Z" level=warning msg="container event discarded" container=cff787bc441989c9d968725edcb873da5f09c00b65ac21f390118201887aaba0 type=CONTAINER_STARTED_EVENT Mar 3 13:58:53.826305 systemd[1]: Started sshd@40-10.0.0.111:22-10.0.0.1:39240.service - OpenSSH per-connection server daemon (10.0.0.1:39240). Mar 3 13:58:54.004541 sshd[7040]: Accepted publickey for core from 10.0.0.1 port 39240 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:54.007306 sshd-session[7040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:54.037876 systemd-logind[1542]: New session 41 of user core. Mar 3 13:58:54.060840 systemd[1]: Started session-41.scope - Session 41 of User core. 
Mar 3 13:58:54.382379 sshd[7043]: Connection closed by 10.0.0.1 port 39240 Mar 3 13:58:54.381837 sshd-session[7040]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:54.401739 systemd[1]: sshd@40-10.0.0.111:22-10.0.0.1:39240.service: Deactivated successfully. Mar 3 13:58:54.411247 systemd[1]: session-41.scope: Deactivated successfully. Mar 3 13:58:54.429453 systemd-logind[1542]: Session 41 logged out. Waiting for processes to exit. Mar 3 13:58:54.437780 systemd-logind[1542]: Removed session 41. Mar 3 13:58:56.280580 kubelet[2857]: E0303 13:58:56.277797 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:58:58.277804 kubelet[2857]: E0303 13:58:58.277693 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:58:59.414223 systemd[1]: Started sshd@41-10.0.0.111:22-10.0.0.1:39254.service - OpenSSH per-connection server daemon (10.0.0.1:39254). Mar 3 13:58:59.545766 sshd[7132]: Accepted publickey for core from 10.0.0.1 port 39254 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:58:59.549205 sshd-session[7132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:58:59.571449 systemd-logind[1542]: New session 42 of user core. Mar 3 13:58:59.588398 systemd[1]: Started session-42.scope - Session 42 of User core. Mar 3 13:58:59.832887 sshd[7136]: Connection closed by 10.0.0.1 port 39254 Mar 3 13:58:59.835260 sshd-session[7132]: pam_unix(sshd:session): session closed for user core Mar 3 13:58:59.846439 systemd[1]: sshd@41-10.0.0.111:22-10.0.0.1:39254.service: Deactivated successfully. Mar 3 13:58:59.849365 systemd[1]: session-42.scope: Deactivated successfully. Mar 3 13:58:59.851273 systemd-logind[1542]: Session 42 logged out. Waiting for processes to exit.
Mar 3 13:58:59.856818 systemd-logind[1542]: Removed session 42. Mar 3 13:59:04.860606 systemd[1]: Started sshd@42-10.0.0.111:22-10.0.0.1:44100.service - OpenSSH per-connection server daemon (10.0.0.1:44100). Mar 3 13:59:04.950866 sshd[7153]: Accepted publickey for core from 10.0.0.1 port 44100 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:59:04.953763 sshd-session[7153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:59:04.974285 systemd-logind[1542]: New session 43 of user core. Mar 3 13:59:04.980330 systemd[1]: Started session-43.scope - Session 43 of User core. Mar 3 13:59:05.164432 sshd[7156]: Connection closed by 10.0.0.1 port 44100 Mar 3 13:59:05.164841 sshd-session[7153]: pam_unix(sshd:session): session closed for user core Mar 3 13:59:05.172189 systemd[1]: sshd@42-10.0.0.111:22-10.0.0.1:44100.service: Deactivated successfully. Mar 3 13:59:05.176753 systemd[1]: session-43.scope: Deactivated successfully. Mar 3 13:59:05.183250 systemd-logind[1542]: Session 43 logged out. Waiting for processes to exit. Mar 3 13:59:05.185630 systemd-logind[1542]: Removed session 43. Mar 3 13:59:10.209386 systemd[1]: Started sshd@43-10.0.0.111:22-10.0.0.1:41274.service - OpenSSH per-connection server daemon (10.0.0.1:41274). Mar 3 13:59:10.330979 sshd[7216]: Accepted publickey for core from 10.0.0.1 port 41274 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:59:10.336795 sshd-session[7216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:59:10.352622 systemd-logind[1542]: New session 44 of user core. Mar 3 13:59:10.362030 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 3 13:59:10.570334 sshd[7219]: Connection closed by 10.0.0.1 port 41274 Mar 3 13:59:10.571507 sshd-session[7216]: pam_unix(sshd:session): session closed for user core Mar 3 13:59:10.583848 systemd[1]: sshd@43-10.0.0.111:22-10.0.0.1:41274.service: Deactivated successfully. Mar 3 13:59:10.587678 systemd[1]: session-44.scope: Deactivated successfully. Mar 3 13:59:10.590665 systemd-logind[1542]: Session 44 logged out. Waiting for processes to exit. Mar 3 13:59:10.593985 systemd-logind[1542]: Removed session 44. Mar 3 13:59:14.282153 kubelet[2857]: E0303 13:59:14.277040 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:59:15.604992 systemd[1]: Started sshd@44-10.0.0.111:22-10.0.0.1:41290.service - OpenSSH per-connection server daemon (10.0.0.1:41290). Mar 3 13:59:15.791366 sshd[7234]: Accepted publickey for core from 10.0.0.1 port 41290 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:59:15.804528 sshd-session[7234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:59:15.856111 systemd-logind[1542]: New session 45 of user core. Mar 3 13:59:15.882219 systemd[1]: Started session-45.scope - Session 45 of User core. 
Mar 3 13:59:16.486039 sshd[7237]: Connection closed by 10.0.0.1 port 41290 Mar 3 13:59:16.491260 sshd-session[7234]: pam_unix(sshd:session): session closed for user core Mar 3 13:59:16.500824 containerd[1567]: time="2026-03-03T13:59:16.500020026Z" level=warning msg="container event discarded" container=7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46 type=CONTAINER_CREATED_EVENT Mar 3 13:59:16.505595 containerd[1567]: time="2026-03-03T13:59:16.505429085Z" level=warning msg="container event discarded" container=7bc057c4fa6fc8658183e897e4d7b53fb5c445181ef57e007505cedb48cbee46 type=CONTAINER_STARTED_EVENT Mar 3 13:59:16.551368 systemd[1]: sshd@44-10.0.0.111:22-10.0.0.1:41290.service: Deactivated successfully. Mar 3 13:59:16.571168 systemd[1]: session-45.scope: Deactivated successfully. Mar 3 13:59:16.586479 systemd-logind[1542]: Session 45 logged out. Waiting for processes to exit. Mar 3 13:59:16.602581 systemd[1]: Started sshd@45-10.0.0.111:22-10.0.0.1:41302.service - OpenSSH per-connection server daemon (10.0.0.1:41302). Mar 3 13:59:16.640355 systemd-logind[1542]: Removed session 45. Mar 3 13:59:16.641043 containerd[1567]: time="2026-03-03T13:59:16.640987823Z" level=warning msg="container event discarded" container=517d861c5f3931fe20e69f9dbd2fb614b9ea2ae103952127f2b514a51ff327b9 type=CONTAINER_CREATED_EVENT Mar 3 13:59:16.641294 containerd[1567]: time="2026-03-03T13:59:16.641200650Z" level=warning msg="container event discarded" container=517d861c5f3931fe20e69f9dbd2fb614b9ea2ae103952127f2b514a51ff327b9 type=CONTAINER_STARTED_EVENT Mar 3 13:59:16.823259 sshd[7251]: Accepted publickey for core from 10.0.0.1 port 41302 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:59:16.829557 sshd-session[7251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:59:16.881292 systemd-logind[1542]: New session 46 of user core. Mar 3 13:59:16.895698 systemd[1]: Started session-46.scope - Session 46 of User core. 
Mar 3 13:59:17.291473 kubelet[2857]: E0303 13:59:17.290869 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:59:17.575984 sshd[7254]: Connection closed by 10.0.0.1 port 41302 Mar 3 13:59:17.577885 sshd-session[7251]: pam_unix(sshd:session): session closed for user core Mar 3 13:59:17.613057 systemd[1]: sshd@45-10.0.0.111:22-10.0.0.1:41302.service: Deactivated successfully. Mar 3 13:59:17.620747 systemd[1]: session-46.scope: Deactivated successfully. Mar 3 13:59:17.647312 systemd-logind[1542]: Session 46 logged out. Waiting for processes to exit. Mar 3 13:59:17.658603 systemd[1]: Started sshd@46-10.0.0.111:22-10.0.0.1:41308.service - OpenSSH per-connection server daemon (10.0.0.1:41308). Mar 3 13:59:17.671141 systemd-logind[1542]: Removed session 46. Mar 3 13:59:17.834767 sshd[7266]: Accepted publickey for core from 10.0.0.1 port 41308 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:59:17.838807 sshd-session[7266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:59:17.870974 systemd-logind[1542]: New session 47 of user core. Mar 3 13:59:17.889739 systemd[1]: Started session-47.scope - Session 47 of User core. 
Mar 3 13:59:18.155605 containerd[1567]: time="2026-03-03T13:59:18.153623514Z" level=warning msg="container event discarded" container=b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607 type=CONTAINER_CREATED_EVENT Mar 3 13:59:18.475834 containerd[1567]: time="2026-03-03T13:59:18.475631168Z" level=warning msg="container event discarded" container=b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607 type=CONTAINER_STARTED_EVENT Mar 3 13:59:18.554781 sshd[7269]: Connection closed by 10.0.0.1 port 41308 Mar 3 13:59:18.556046 sshd-session[7266]: pam_unix(sshd:session): session closed for user core Mar 3 13:59:18.575227 systemd[1]: sshd@46-10.0.0.111:22-10.0.0.1:41308.service: Deactivated successfully. Mar 3 13:59:18.583326 systemd[1]: session-47.scope: Deactivated successfully. Mar 3 13:59:18.597775 systemd-logind[1542]: Session 47 logged out. Waiting for processes to exit. Mar 3 13:59:18.611024 systemd-logind[1542]: Removed session 47. Mar 3 13:59:18.626587 containerd[1567]: time="2026-03-03T13:59:18.626527870Z" level=warning msg="container event discarded" container=b452b29dbb8e120baa2152d083c8a379d72adb3738b256bd170f8a7ff932d607 type=CONTAINER_STOPPED_EVENT Mar 3 13:59:21.878013 containerd[1567]: time="2026-03-03T13:59:21.877870880Z" level=warning msg="container event discarded" container=76389116d29c6e47fa36b764eab9bfbde096ebf34b0a90e089972b169ac030e4 type=CONTAINER_CREATED_EVENT Mar 3 13:59:22.054022 containerd[1567]: time="2026-03-03T13:59:22.053640089Z" level=warning msg="container event discarded" container=76389116d29c6e47fa36b764eab9bfbde096ebf34b0a90e089972b169ac030e4 type=CONTAINER_STARTED_EVENT Mar 3 13:59:23.635066 systemd[1]: Started sshd@47-10.0.0.111:22-10.0.0.1:53312.service - OpenSSH per-connection server daemon (10.0.0.1:53312). 
Mar 3 13:59:23.840484 sshd[7304]: Accepted publickey for core from 10.0.0.1 port 53312 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:59:23.849238 sshd-session[7304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:59:23.883424 systemd-logind[1542]: New session 48 of user core. Mar 3 13:59:23.914158 systemd[1]: Started session-48.scope - Session 48 of User core. Mar 3 13:59:24.545323 sshd[7307]: Connection closed by 10.0.0.1 port 53312 Mar 3 13:59:24.544238 sshd-session[7304]: pam_unix(sshd:session): session closed for user core Mar 3 13:59:24.581971 systemd[1]: sshd@47-10.0.0.111:22-10.0.0.1:53312.service: Deactivated successfully. Mar 3 13:59:24.605690 systemd[1]: session-48.scope: Deactivated successfully. Mar 3 13:59:24.628651 systemd-logind[1542]: Session 48 logged out. Waiting for processes to exit. Mar 3 13:59:24.647814 systemd-logind[1542]: Removed session 48. Mar 3 13:59:30.403433 systemd[1]: Started sshd@48-10.0.0.111:22-10.0.0.1:53316.service - OpenSSH per-connection server daemon (10.0.0.1:53316). Mar 3 13:59:31.453564 kubelet[2857]: E0303 13:59:31.443751 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.425s" Mar 3 13:59:32.902496 sshd[7320]: Accepted publickey for core from 10.0.0.1 port 53316 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:59:32.905291 sshd-session[7320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:59:33.655666 systemd[1]: Started session-49.scope - Session 49 of User core. Mar 3 13:59:33.704596 systemd-logind[1542]: New session 49 of user core. 
Mar 3 13:59:33.724081 kubelet[2857]: E0303 13:59:33.715817 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.253s" Mar 3 13:59:37.708376 sshd[7329]: Connection closed by 10.0.0.1 port 53316 Mar 3 13:59:37.778694 sshd-session[7320]: pam_unix(sshd:session): session closed for user core Mar 3 13:59:38.026203 systemd[1]: sshd@48-10.0.0.111:22-10.0.0.1:53316.service: Deactivated successfully. Mar 3 13:59:38.173239 systemd[1]: session-49.scope: Deactivated successfully. Mar 3 13:59:38.200395 systemd-logind[1542]: Session 49 logged out. Waiting for processes to exit. Mar 3 13:59:38.306498 systemd-logind[1542]: Removed session 49. Mar 3 13:59:41.747436 containerd[1567]: time="2026-03-03T13:59:41.741584223Z" level=warning msg="container event discarded" container=2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919 type=CONTAINER_CREATED_EVENT Mar 3 13:59:41.936513 kubelet[2857]: E0303 13:59:41.935694 2857 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.642s" Mar 3 13:59:41.988247 containerd[1567]: time="2026-03-03T13:59:41.987738163Z" level=warning msg="container event discarded" container=2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919 type=CONTAINER_STARTED_EVENT Mar 3 13:59:42.355051 containerd[1567]: time="2026-03-03T13:59:42.352465815Z" level=warning msg="container event discarded" container=2df857b1930a7cb7102c8d96d1adc76a303f31d4bf7288df8a79a6d68d6a6919 type=CONTAINER_STOPPED_EVENT Mar 3 13:59:42.781101 systemd[1]: Started sshd@49-10.0.0.111:22-10.0.0.1:51290.service - OpenSSH per-connection server daemon (10.0.0.1:51290). 
Mar 3 13:59:43.242536 sshd[7397]: Accepted publickey for core from 10.0.0.1 port 51290 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:59:43.252856 sshd-session[7397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:59:43.304496 systemd-logind[1542]: New session 50 of user core. Mar 3 13:59:43.324406 systemd[1]: Started session-50.scope - Session 50 of User core. Mar 3 13:59:44.280097 kubelet[2857]: E0303 13:59:44.279715 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:59:44.574584 sshd[7407]: Connection closed by 10.0.0.1 port 51290 Mar 3 13:59:44.576264 sshd-session[7397]: pam_unix(sshd:session): session closed for user core Mar 3 13:59:44.600695 systemd[1]: sshd@49-10.0.0.111:22-10.0.0.1:51290.service: Deactivated successfully. Mar 3 13:59:44.611587 systemd[1]: session-50.scope: Deactivated successfully. Mar 3 13:59:44.625499 systemd-logind[1542]: Session 50 logged out. Waiting for processes to exit. Mar 3 13:59:44.640247 systemd-logind[1542]: Removed session 50. Mar 3 13:59:49.643129 systemd[1]: Started sshd@50-10.0.0.111:22-10.0.0.1:51302.service - OpenSSH per-connection server daemon (10.0.0.1:51302). Mar 3 13:59:49.903027 sshd[7421]: Accepted publickey for core from 10.0.0.1 port 51302 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:59:49.911855 sshd-session[7421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:59:49.937815 systemd-logind[1542]: New session 51 of user core. Mar 3 13:59:49.964796 systemd[1]: Started session-51.scope - Session 51 of User core. 
Mar 3 13:59:50.612832 sshd[7424]: Connection closed by 10.0.0.1 port 51302 Mar 3 13:59:50.624389 sshd-session[7421]: pam_unix(sshd:session): session closed for user core Mar 3 13:59:50.694642 systemd[1]: sshd@50-10.0.0.111:22-10.0.0.1:51302.service: Deactivated successfully. Mar 3 13:59:50.725557 systemd[1]: session-51.scope: Deactivated successfully. Mar 3 13:59:50.768995 systemd-logind[1542]: Session 51 logged out. Waiting for processes to exit. Mar 3 13:59:50.778115 systemd-logind[1542]: Removed session 51. Mar 3 13:59:51.855056 containerd[1567]: time="2026-03-03T13:59:51.854383963Z" level=warning msg="container event discarded" container=930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a type=CONTAINER_CREATED_EVENT Mar 3 13:59:52.410527 containerd[1567]: time="2026-03-03T13:59:52.410314759Z" level=warning msg="container event discarded" container=930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a type=CONTAINER_STARTED_EVENT Mar 3 13:59:54.275398 kubelet[2857]: E0303 13:59:54.274635 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 13:59:55.351431 containerd[1567]: time="2026-03-03T13:59:55.351120995Z" level=warning msg="container event discarded" container=930590f8db534e38dd1622fb6e16697d73d10f599b087970697a6bddecb43b1a type=CONTAINER_STOPPED_EVENT Mar 3 13:59:55.680684 systemd[1]: Started sshd@51-10.0.0.111:22-10.0.0.1:36430.service - OpenSSH per-connection server daemon (10.0.0.1:36430). 
Mar 3 13:59:55.817549 containerd[1567]: time="2026-03-03T13:59:55.807313151Z" level=warning msg="container event discarded" container=860b661fc08dbead18667d89dc691951363386cb183bdb11f95c6d58b8da4339 type=CONTAINER_CREATED_EVENT Mar 3 13:59:55.970120 sshd[7477]: Accepted publickey for core from 10.0.0.1 port 36430 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 13:59:55.984514 sshd-session[7477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 13:59:56.040868 systemd-logind[1542]: New session 52 of user core. Mar 3 13:59:56.060373 systemd[1]: Started session-52.scope - Session 52 of User core. Mar 3 13:59:56.490368 containerd[1567]: time="2026-03-03T13:59:56.487052991Z" level=warning msg="container event discarded" container=860b661fc08dbead18667d89dc691951363386cb183bdb11f95c6d58b8da4339 type=CONTAINER_STARTED_EVENT Mar 3 13:59:56.816015 sshd[7513]: Connection closed by 10.0.0.1 port 36430 Mar 3 13:59:56.826422 sshd-session[7477]: pam_unix(sshd:session): session closed for user core Mar 3 13:59:56.855680 systemd[1]: sshd@51-10.0.0.111:22-10.0.0.1:36430.service: Deactivated successfully. Mar 3 13:59:56.869746 systemd[1]: session-52.scope: Deactivated successfully. Mar 3 13:59:56.887109 systemd-logind[1542]: Session 52 logged out. Waiting for processes to exit. Mar 3 13:59:56.895769 systemd-logind[1542]: Removed session 52. Mar 3 14:00:01.032008 containerd[1567]: time="2026-03-03T14:00:01.031499249Z" level=warning msg="container event discarded" container=eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002 type=CONTAINER_CREATED_EVENT Mar 3 14:00:01.032008 containerd[1567]: time="2026-03-03T14:00:01.031563609Z" level=warning msg="container event discarded" container=eb47d0ad25ae573f306a2c1a2a449838f566115e7c674ebf28d844618e192002 type=CONTAINER_STARTED_EVENT Mar 3 14:00:01.889650 systemd[1]: Started sshd@52-10.0.0.111:22-10.0.0.1:38256.service - OpenSSH per-connection server daemon (10.0.0.1:38256). 
Mar 3 14:00:02.237843 sshd[7570]: Accepted publickey for core from 10.0.0.1 port 38256 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:02.235376 sshd-session[7570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:02.276427 systemd-logind[1542]: New session 53 of user core. Mar 3 14:00:02.310866 systemd[1]: Started session-53.scope - Session 53 of User core. Mar 3 14:00:03.106473 sshd[7573]: Connection closed by 10.0.0.1 port 38256 Mar 3 14:00:03.104619 sshd-session[7570]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:03.143654 systemd[1]: sshd@52-10.0.0.111:22-10.0.0.1:38256.service: Deactivated successfully. Mar 3 14:00:03.166503 systemd[1]: session-53.scope: Deactivated successfully. Mar 3 14:00:03.180087 systemd-logind[1542]: Session 53 logged out. Waiting for processes to exit. Mar 3 14:00:03.190544 systemd-logind[1542]: Removed session 53. Mar 3 14:00:03.823567 containerd[1567]: time="2026-03-03T14:00:03.817586034Z" level=warning msg="container event discarded" container=c1bc29e9803acd73dd74c9c1f6a123cff4cc0d4b0d14e32d76f54e7c16d20d37 type=CONTAINER_CREATED_EVENT Mar 3 14:00:04.284773 kubelet[2857]: E0303 14:00:04.274742 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:04.849801 containerd[1567]: time="2026-03-03T14:00:04.849308678Z" level=warning msg="container event discarded" container=c1bc29e9803acd73dd74c9c1f6a123cff4cc0d4b0d14e32d76f54e7c16d20d37 type=CONTAINER_STARTED_EVENT Mar 3 14:00:08.194077 systemd[1]: Started sshd@53-10.0.0.111:22-10.0.0.1:38262.service - OpenSSH per-connection server daemon (10.0.0.1:38262). 
Mar 3 14:00:08.276862 containerd[1567]: time="2026-03-03T14:00:08.275751939Z" level=warning msg="container event discarded" container=75a039fdd8218d8e86d627200fc2289dd6211a772e00375189cfaabb4cf4ba52 type=CONTAINER_CREATED_EVENT Mar 3 14:00:08.446977 sshd[7606]: Accepted publickey for core from 10.0.0.1 port 38262 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:08.447462 sshd-session[7606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:08.496396 systemd-logind[1542]: New session 54 of user core. Mar 3 14:00:08.513342 systemd[1]: Started session-54.scope - Session 54 of User core. Mar 3 14:00:08.584730 containerd[1567]: time="2026-03-03T14:00:08.584484290Z" level=warning msg="container event discarded" container=19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a type=CONTAINER_CREATED_EVENT Mar 3 14:00:08.586450 containerd[1567]: time="2026-03-03T14:00:08.586072119Z" level=warning msg="container event discarded" container=19ec01a1ea95c26e4ef1595afc3d7e80795a2f8c82a61f5413942c647be5622a type=CONTAINER_STARTED_EVENT Mar 3 14:00:08.675609 containerd[1567]: time="2026-03-03T14:00:08.675520595Z" level=warning msg="container event discarded" container=75a039fdd8218d8e86d627200fc2289dd6211a772e00375189cfaabb4cf4ba52 type=CONTAINER_STARTED_EVENT Mar 3 14:00:09.100389 sshd[7616]: Connection closed by 10.0.0.1 port 38262 Mar 3 14:00:09.102866 sshd-session[7606]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:09.112295 systemd[1]: sshd@53-10.0.0.111:22-10.0.0.1:38262.service: Deactivated successfully. Mar 3 14:00:09.120684 systemd[1]: session-54.scope: Deactivated successfully. Mar 3 14:00:09.133691 systemd-logind[1542]: Session 54 logged out. Waiting for processes to exit. Mar 3 14:00:09.144167 systemd-logind[1542]: Removed session 54. 
Mar 3 14:00:09.276023 kubelet[2857]: E0303 14:00:09.275546 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:09.794634 containerd[1567]: time="2026-03-03T14:00:09.788118256Z" level=warning msg="container event discarded" container=1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48 type=CONTAINER_CREATED_EVENT Mar 3 14:00:09.794634 containerd[1567]: time="2026-03-03T14:00:09.790022929Z" level=warning msg="container event discarded" container=1d94f4a541c2db4a21b1811c004077ff73b56a8d976f62e44833af9112d39e48 type=CONTAINER_STARTED_EVENT Mar 3 14:00:10.933090 containerd[1567]: time="2026-03-03T14:00:10.932860227Z" level=warning msg="container event discarded" container=ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724 type=CONTAINER_CREATED_EVENT Mar 3 14:00:10.933090 containerd[1567]: time="2026-03-03T14:00:10.933044771Z" level=warning msg="container event discarded" container=ab507685634cac2489abe04f1bd4eaf7690b814c1561c32ca57bee8f15244724 type=CONTAINER_STARTED_EVENT Mar 3 14:00:11.448488 containerd[1567]: time="2026-03-03T14:00:11.448083009Z" level=warning msg="container event discarded" container=04a89c1eb35853c2142661577645d8bfa555ad3a91d7dadd0d69e25e562c9a0c type=CONTAINER_CREATED_EVENT Mar 3 14:00:11.552394 containerd[1567]: time="2026-03-03T14:00:11.552282718Z" level=warning msg="container event discarded" container=81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0 type=CONTAINER_CREATED_EVENT Mar 3 14:00:11.552394 containerd[1567]: time="2026-03-03T14:00:11.552352809Z" level=warning msg="container event discarded" container=81310b0330b0575a987622bb623a0e781c05327366f471ec5bad6fa3d4901ae0 type=CONTAINER_STARTED_EVENT Mar 3 14:00:11.667691 containerd[1567]: time="2026-03-03T14:00:11.667618937Z" level=warning msg="container event discarded" container=37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169 type=CONTAINER_CREATED_EVENT
Mar 3 14:00:11.668073 containerd[1567]: time="2026-03-03T14:00:11.668024174Z" level=warning msg="container event discarded" container=37949956fee451cbdb318607c6ede0570cc31072b966d959cfa6c3f11ee6f169 type=CONTAINER_STARTED_EVENT Mar 3 14:00:11.933868 containerd[1567]: time="2026-03-03T14:00:11.933814589Z" level=warning msg="container event discarded" container=04a89c1eb35853c2142661577645d8bfa555ad3a91d7dadd0d69e25e562c9a0c type=CONTAINER_STARTED_EVENT Mar 3 14:00:12.318875 containerd[1567]: time="2026-03-03T14:00:12.318777052Z" level=warning msg="container event discarded" container=d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132 type=CONTAINER_CREATED_EVENT Mar 3 14:00:12.318875 containerd[1567]: time="2026-03-03T14:00:12.318835371Z" level=warning msg="container event discarded" container=d258be7115bbf3926fd6d942e4187e4f7d258053362d1aaa75c76ed669f99132 type=CONTAINER_STARTED_EVENT Mar 3 14:00:12.418758 containerd[1567]: time="2026-03-03T14:00:12.418493888Z" level=warning msg="container event discarded" container=c53f3c2190833aa7017e12b2ae7296115282bde500aff48eb6f545cb4a5b3113 type=CONTAINER_CREATED_EVENT Mar 3 14:00:12.987559 containerd[1567]: time="2026-03-03T14:00:12.987469964Z" level=warning msg="container event discarded" container=c53f3c2190833aa7017e12b2ae7296115282bde500aff48eb6f545cb4a5b3113 type=CONTAINER_STARTED_EVENT Mar 3 14:00:13.099022 containerd[1567]: time="2026-03-03T14:00:13.098815539Z" level=warning msg="container event discarded" container=1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c type=CONTAINER_CREATED_EVENT Mar 3 14:00:13.099022 containerd[1567]: time="2026-03-03T14:00:13.098860483Z" level=warning msg="container event discarded" container=1fef2389233d2761366929f31a67d34fba37586d181d73b963d0098aa1dd238c type=CONTAINER_STARTED_EVENT
Mar 3 14:00:13.240879 containerd[1567]: time="2026-03-03T14:00:13.240505973Z" level=warning msg="container event discarded" container=1869b66e6fc63c9ba9e1927de7062ccec89e33862c957d80a15e5d38c4e95f1e type=CONTAINER_CREATED_EVENT Mar 3 14:00:13.643758 containerd[1567]: time="2026-03-03T14:00:13.639196378Z" level=warning msg="container event discarded" container=1869b66e6fc63c9ba9e1927de7062ccec89e33862c957d80a15e5d38c4e95f1e type=CONTAINER_STARTED_EVENT Mar 3 14:00:14.246362 systemd[1]: Started sshd@54-10.0.0.111:22-10.0.0.1:58630.service - OpenSSH per-connection server daemon (10.0.0.1:58630). Mar 3 14:00:14.853024 sshd[7629]: Accepted publickey for core from 10.0.0.1 port 58630 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:14.860519 sshd-session[7629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:14.873847 systemd-logind[1542]: New session 55 of user core. Mar 3 14:00:14.908486 systemd[1]: Started session-55.scope - Session 55 of User core. Mar 3 14:00:15.848705 sshd[7632]: Connection closed by 10.0.0.1 port 58630 Mar 3 14:00:15.855762 sshd-session[7629]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:15.920398 systemd[1]: sshd@54-10.0.0.111:22-10.0.0.1:58630.service: Deactivated successfully. Mar 3 14:00:15.995742 systemd[1]: session-55.scope: Deactivated successfully. Mar 3 14:00:16.055167 systemd-logind[1542]: Session 55 logged out. Waiting for processes to exit. Mar 3 14:00:16.099040 systemd-logind[1542]: Removed session 55.
Mar 3 14:00:16.477733 kubelet[2857]: E0303 14:00:16.438878 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:20.666718 containerd[1567]: time="2026-03-03T14:00:20.665047064Z" level=warning msg="container event discarded" container=0d38f0afe6c7afe6a4f6a8a357ab8adb089f6d1dc39535256628032238751d05 type=CONTAINER_CREATED_EVENT Mar 3 14:00:20.939402 systemd[1]: Started sshd@55-10.0.0.111:22-10.0.0.1:41012.service - OpenSSH per-connection server daemon (10.0.0.1:41012). Mar 3 14:00:21.078102 containerd[1567]: time="2026-03-03T14:00:21.078018726Z" level=warning msg="container event discarded" container=0d38f0afe6c7afe6a4f6a8a357ab8adb089f6d1dc39535256628032238751d05 type=CONTAINER_STARTED_EVENT Mar 3 14:00:21.134769 sshd[7646]: Accepted publickey for core from 10.0.0.1 port 41012 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:21.142415 sshd-session[7646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:21.180574 systemd-logind[1542]: New session 56 of user core. Mar 3 14:00:21.202362 systemd[1]: Started session-56.scope - Session 56 of User core. Mar 3 14:00:21.843517 sshd[7649]: Connection closed by 10.0.0.1 port 41012 Mar 3 14:00:21.847088 sshd-session[7646]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:21.935146 systemd[1]: sshd@55-10.0.0.111:22-10.0.0.1:41012.service: Deactivated successfully. Mar 3 14:00:21.958042 systemd[1]: session-56.scope: Deactivated successfully. Mar 3 14:00:21.960705 systemd-logind[1542]: Session 56 logged out. Waiting for processes to exit. Mar 3 14:00:21.994747 systemd-logind[1542]: Removed session 56. 
Mar 3 14:00:26.278402 kubelet[2857]: E0303 14:00:26.278212 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:26.888702 systemd[1]: Started sshd@56-10.0.0.111:22-10.0.0.1:41018.service - OpenSSH per-connection server daemon (10.0.0.1:41018). Mar 3 14:00:27.019154 sshd[7697]: Accepted publickey for core from 10.0.0.1 port 41018 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:27.027689 sshd-session[7697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:27.049131 systemd-logind[1542]: New session 57 of user core. Mar 3 14:00:27.068809 systemd[1]: Started session-57.scope - Session 57 of User core. Mar 3 14:00:27.281634 kubelet[2857]: E0303 14:00:27.281170 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:00:27.400421 sshd[7700]: Connection closed by 10.0.0.1 port 41018 Mar 3 14:00:27.401659 sshd-session[7697]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:27.420184 systemd[1]: sshd@56-10.0.0.111:22-10.0.0.1:41018.service: Deactivated successfully. Mar 3 14:00:27.425600 systemd[1]: session-57.scope: Deactivated successfully. Mar 3 14:00:27.435418 systemd-logind[1542]: Session 57 logged out. Waiting for processes to exit. Mar 3 14:00:27.443558 systemd-logind[1542]: Removed session 57. 
Mar 3 14:00:27.694730 containerd[1567]: time="2026-03-03T14:00:27.694411159Z" level=warning msg="container event discarded" container=e7cccf5117562abdffd7088a52acc37e98db7f950be9f8309a1b92e1a63fcda9 type=CONTAINER_CREATED_EVENT Mar 3 14:00:28.098636 containerd[1567]: time="2026-03-03T14:00:28.095590624Z" level=warning msg="container event discarded" container=e7cccf5117562abdffd7088a52acc37e98db7f950be9f8309a1b92e1a63fcda9 type=CONTAINER_STARTED_EVENT Mar 3 14:00:32.497433 systemd[1]: Started sshd@57-10.0.0.111:22-10.0.0.1:37450.service - OpenSSH per-connection server daemon (10.0.0.1:37450). Mar 3 14:00:32.758116 sshd[7740]: Accepted publickey for core from 10.0.0.1 port 37450 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:32.781177 sshd-session[7740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:32.816656 systemd-logind[1542]: New session 58 of user core. Mar 3 14:00:32.834307 systemd[1]: Started session-58.scope - Session 58 of User core. Mar 3 14:00:33.521333 sshd[7743]: Connection closed by 10.0.0.1 port 37450 Mar 3 14:00:33.520318 sshd-session[7740]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:33.547227 systemd[1]: sshd@57-10.0.0.111:22-10.0.0.1:37450.service: Deactivated successfully. Mar 3 14:00:33.557412 systemd[1]: session-58.scope: Deactivated successfully. Mar 3 14:00:33.569866 systemd-logind[1542]: Session 58 logged out. Waiting for processes to exit. Mar 3 14:00:33.593363 systemd-logind[1542]: Removed session 58. 
Mar 3 14:00:35.051119 containerd[1567]: time="2026-03-03T14:00:35.050471011Z" level=warning msg="container event discarded" container=3a89499a9140c09dc450b567f19902c2c59fe4b2446c2b55dfe923095b9c6335 type=CONTAINER_CREATED_EVENT Mar 3 14:00:35.396635 containerd[1567]: time="2026-03-03T14:00:35.396115582Z" level=warning msg="container event discarded" container=3a89499a9140c09dc450b567f19902c2c59fe4b2446c2b55dfe923095b9c6335 type=CONTAINER_STARTED_EVENT Mar 3 14:00:35.836106 containerd[1567]: time="2026-03-03T14:00:35.835410986Z" level=warning msg="container event discarded" container=98bf9449442c8c0d507def6a4e5e6ac83d020bcf31e14e4628f224537713b1a1 type=CONTAINER_CREATED_EVENT Mar 3 14:00:36.270381 containerd[1567]: time="2026-03-03T14:00:36.270111261Z" level=warning msg="container event discarded" container=98bf9449442c8c0d507def6a4e5e6ac83d020bcf31e14e4628f224537713b1a1 type=CONTAINER_STARTED_EVENT Mar 3 14:00:38.502329 containerd[1567]: time="2026-03-03T14:00:38.502171776Z" level=warning msg="container event discarded" container=0c8997b3113a824e252645974b82bdc3674b810cfcd5a358e14b2051dd9b02be type=CONTAINER_CREATED_EVENT Mar 3 14:00:38.580180 systemd[1]: Started sshd@58-10.0.0.111:22-10.0.0.1:37462.service - OpenSSH per-connection server daemon (10.0.0.1:37462). Mar 3 14:00:38.828395 sshd[7789]: Accepted publickey for core from 10.0.0.1 port 37462 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:38.836881 sshd-session[7789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:38.853616 containerd[1567]: time="2026-03-03T14:00:38.852155059Z" level=warning msg="container event discarded" container=0c8997b3113a824e252645974b82bdc3674b810cfcd5a358e14b2051dd9b02be type=CONTAINER_STARTED_EVENT Mar 3 14:00:38.864352 systemd-logind[1542]: New session 59 of user core. Mar 3 14:00:38.896231 systemd[1]: Started session-59.scope - Session 59 of User core. 
Mar 3 14:00:39.483139 sshd[7792]: Connection closed by 10.0.0.1 port 37462 Mar 3 14:00:39.484394 sshd-session[7789]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:39.516370 systemd[1]: sshd@58-10.0.0.111:22-10.0.0.1:37462.service: Deactivated successfully. Mar 3 14:00:39.535545 systemd[1]: session-59.scope: Deactivated successfully. Mar 3 14:00:39.548372 systemd-logind[1542]: Session 59 logged out. Waiting for processes to exit. Mar 3 14:00:39.559748 systemd-logind[1542]: Removed session 59. Mar 3 14:00:44.553234 systemd[1]: Started sshd@59-10.0.0.111:22-10.0.0.1:58906.service - OpenSSH per-connection server daemon (10.0.0.1:58906). Mar 3 14:00:44.723650 sshd[7806]: Accepted publickey for core from 10.0.0.1 port 58906 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:44.731467 sshd-session[7806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:44.763610 systemd-logind[1542]: New session 60 of user core. Mar 3 14:00:44.778033 systemd[1]: Started session-60.scope - Session 60 of User core. Mar 3 14:00:45.213551 sshd[7809]: Connection closed by 10.0.0.1 port 58906 Mar 3 14:00:45.214218 sshd-session[7806]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:45.225061 systemd[1]: sshd@59-10.0.0.111:22-10.0.0.1:58906.service: Deactivated successfully. Mar 3 14:00:45.231740 systemd[1]: session-60.scope: Deactivated successfully. Mar 3 14:00:45.237880 systemd-logind[1542]: Session 60 logged out. Waiting for processes to exit. Mar 3 14:00:45.243669 systemd-logind[1542]: Removed session 60. Mar 3 14:00:50.326821 systemd[1]: Started sshd@60-10.0.0.111:22-10.0.0.1:41240.service - OpenSSH per-connection server daemon (10.0.0.1:41240). 
Mar 3 14:00:50.636151 sshd[7824]: Accepted publickey for core from 10.0.0.1 port 41240 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:50.643202 sshd-session[7824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:50.725204 systemd-logind[1542]: New session 61 of user core. Mar 3 14:00:50.746048 systemd[1]: Started session-61.scope - Session 61 of User core. Mar 3 14:00:51.520052 sshd[7827]: Connection closed by 10.0.0.1 port 41240 Mar 3 14:00:51.532620 sshd-session[7824]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:51.575394 systemd[1]: Started sshd@61-10.0.0.111:22-10.0.0.1:41244.service - OpenSSH per-connection server daemon (10.0.0.1:41244). Mar 3 14:00:51.576507 systemd[1]: sshd@60-10.0.0.111:22-10.0.0.1:41240.service: Deactivated successfully. Mar 3 14:00:51.579825 systemd[1]: session-61.scope: Deactivated successfully. Mar 3 14:00:51.593681 systemd-logind[1542]: Session 61 logged out. Waiting for processes to exit. Mar 3 14:00:51.603544 systemd-logind[1542]: Removed session 61. Mar 3 14:00:51.771678 sshd[7837]: Accepted publickey for core from 10.0.0.1 port 41244 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:51.777169 sshd-session[7837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:51.834765 systemd-logind[1542]: New session 62 of user core. Mar 3 14:00:51.860141 systemd[1]: Started session-62.scope - Session 62 of User core. Mar 3 14:00:53.937095 sshd[7860]: Connection closed by 10.0.0.1 port 41244 Mar 3 14:00:53.968633 sshd-session[7837]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:54.009489 systemd[1]: Started sshd@62-10.0.0.111:22-10.0.0.1:41246.service - OpenSSH per-connection server daemon (10.0.0.1:41246). Mar 3 14:00:54.021076 systemd[1]: sshd@61-10.0.0.111:22-10.0.0.1:41244.service: Deactivated successfully. Mar 3 14:00:54.029811 systemd-logind[1542]: Session 62 logged out. 
Waiting for processes to exit. Mar 3 14:00:54.033202 systemd[1]: session-62.scope: Deactivated successfully. Mar 3 14:00:54.060655 systemd-logind[1542]: Removed session 62. Mar 3 14:00:54.828108 sshd[7873]: Accepted publickey for core from 10.0.0.1 port 41246 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:54.849690 sshd-session[7873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:54.893422 systemd-logind[1542]: New session 63 of user core. Mar 3 14:00:54.926346 systemd[1]: Started session-63.scope - Session 63 of User core. Mar 3 14:00:58.407419 sshd[7879]: Connection closed by 10.0.0.1 port 41246 Mar 3 14:00:58.413425 sshd-session[7873]: pam_unix(sshd:session): session closed for user core Mar 3 14:00:58.455873 systemd[1]: sshd@62-10.0.0.111:22-10.0.0.1:41246.service: Deactivated successfully. Mar 3 14:00:58.497162 systemd[1]: session-63.scope: Deactivated successfully. Mar 3 14:00:58.499614 systemd[1]: session-63.scope: Consumed 1.085s CPU time, 40.8M memory peak. Mar 3 14:00:58.518547 systemd-logind[1542]: Session 63 logged out. Waiting for processes to exit. Mar 3 14:00:58.528827 systemd[1]: Started sshd@63-10.0.0.111:22-10.0.0.1:41258.service - OpenSSH per-connection server daemon (10.0.0.1:41258). Mar 3 14:00:58.537500 systemd-logind[1542]: Removed session 63. Mar 3 14:00:58.929675 sshd[7952]: Accepted publickey for core from 10.0.0.1 port 41258 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:00:58.960741 sshd-session[7952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:00:59.029869 systemd-logind[1542]: New session 64 of user core. Mar 3 14:00:59.038841 systemd[1]: Started session-64.scope - Session 64 of User core. 
Mar 3 14:01:01.312575 kubelet[2857]: E0303 14:01:01.306556 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:01.504097 sshd[7981]: Connection closed by 10.0.0.1 port 41258 Mar 3 14:01:01.508339 sshd-session[7952]: pam_unix(sshd:session): session closed for user core Mar 3 14:01:01.562182 systemd[1]: sshd@63-10.0.0.111:22-10.0.0.1:41258.service: Deactivated successfully. Mar 3 14:01:01.575753 systemd[1]: session-64.scope: Deactivated successfully. Mar 3 14:01:01.582263 systemd-logind[1542]: Session 64 logged out. Waiting for processes to exit. Mar 3 14:01:01.610161 systemd[1]: Started sshd@64-10.0.0.111:22-10.0.0.1:59462.service - OpenSSH per-connection server daemon (10.0.0.1:59462). Mar 3 14:01:01.705418 systemd-logind[1542]: Removed session 64. Mar 3 14:01:01.945416 sshd[7996]: Accepted publickey for core from 10.0.0.1 port 59462 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:01:01.962419 sshd-session[7996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:01:02.015665 systemd-logind[1542]: New session 65 of user core. Mar 3 14:01:02.040373 systemd[1]: Started session-65.scope - Session 65 of User core. Mar 3 14:01:02.557532 sshd[7999]: Connection closed by 10.0.0.1 port 59462 Mar 3 14:01:02.558716 sshd-session[7996]: pam_unix(sshd:session): session closed for user core Mar 3 14:01:02.575385 systemd[1]: sshd@64-10.0.0.111:22-10.0.0.1:59462.service: Deactivated successfully. Mar 3 14:01:02.585776 systemd[1]: session-65.scope: Deactivated successfully. Mar 3 14:01:02.600049 systemd-logind[1542]: Session 65 logged out. Waiting for processes to exit. Mar 3 14:01:02.604792 systemd-logind[1542]: Removed session 65. 
Mar 3 14:01:06.275113 kubelet[2857]: E0303 14:01:06.275065 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:07.634484 systemd[1]: Started sshd@65-10.0.0.111:22-10.0.0.1:59476.service - OpenSSH per-connection server daemon (10.0.0.1:59476). Mar 3 14:01:07.903680 sshd[8015]: Accepted publickey for core from 10.0.0.1 port 59476 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:01:07.916212 sshd-session[8015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:01:07.998540 systemd-logind[1542]: New session 66 of user core. Mar 3 14:01:08.042128 systemd[1]: Started session-66.scope - Session 66 of User core. Mar 3 14:01:09.238121 sshd[8024]: Connection closed by 10.0.0.1 port 59476 Mar 3 14:01:09.241854 sshd-session[8015]: pam_unix(sshd:session): session closed for user core Mar 3 14:01:09.303115 systemd[1]: sshd@65-10.0.0.111:22-10.0.0.1:59476.service: Deactivated successfully. Mar 3 14:01:09.313830 systemd[1]: session-66.scope: Deactivated successfully. Mar 3 14:01:09.328518 systemd-logind[1542]: Session 66 logged out. Waiting for processes to exit. Mar 3 14:01:09.368231 systemd-logind[1542]: Removed session 66. Mar 3 14:01:13.301705 kubelet[2857]: E0303 14:01:13.288655 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:14.337469 systemd[1]: Started sshd@66-10.0.0.111:22-10.0.0.1:48296.service - OpenSSH per-connection server daemon (10.0.0.1:48296). 
Mar 3 14:01:14.913249 sshd[8071]: Accepted publickey for core from 10.0.0.1 port 48296 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:01:14.953092 sshd-session[8071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:01:15.093485 systemd-logind[1542]: New session 67 of user core. Mar 3 14:01:15.150568 systemd[1]: Started session-67.scope - Session 67 of User core. Mar 3 14:01:16.709073 sshd[8074]: Connection closed by 10.0.0.1 port 48296 Mar 3 14:01:16.707627 sshd-session[8071]: pam_unix(sshd:session): session closed for user core Mar 3 14:01:16.766622 systemd[1]: sshd@66-10.0.0.111:22-10.0.0.1:48296.service: Deactivated successfully. Mar 3 14:01:16.774812 systemd[1]: session-67.scope: Deactivated successfully. Mar 3 14:01:16.844445 systemd-logind[1542]: Session 67 logged out. Waiting for processes to exit. Mar 3 14:01:16.856881 systemd-logind[1542]: Removed session 67. Mar 3 14:01:21.850145 systemd[1]: Started sshd@67-10.0.0.111:22-10.0.0.1:46414.service - OpenSSH per-connection server daemon (10.0.0.1:46414). Mar 3 14:01:22.414801 sshd[8094]: Accepted publickey for core from 10.0.0.1 port 46414 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:01:22.439174 sshd-session[8094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:01:22.505672 systemd-logind[1542]: New session 68 of user core. Mar 3 14:01:22.529287 systemd[1]: Started session-68.scope - Session 68 of User core. Mar 3 14:01:23.412603 sshd[8115]: Connection closed by 10.0.0.1 port 46414 Mar 3 14:01:23.414768 sshd-session[8094]: pam_unix(sshd:session): session closed for user core Mar 3 14:01:23.472309 systemd[1]: sshd@67-10.0.0.111:22-10.0.0.1:46414.service: Deactivated successfully. Mar 3 14:01:23.503575 systemd[1]: session-68.scope: Deactivated successfully. Mar 3 14:01:23.538887 systemd-logind[1542]: Session 68 logged out. Waiting for processes to exit. 
Mar 3 14:01:23.568819 systemd-logind[1542]: Removed session 68. Mar 3 14:01:24.285848 kubelet[2857]: E0303 14:01:24.277832 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:28.578251 systemd[1]: Started sshd@68-10.0.0.111:22-10.0.0.1:46444.service - OpenSSH per-connection server daemon (10.0.0.1:46444). Mar 3 14:01:29.611177 sshd[8129]: Accepted publickey for core from 10.0.0.1 port 46444 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:01:29.634459 sshd-session[8129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:01:29.673869 systemd-logind[1542]: New session 69 of user core. Mar 3 14:01:29.700712 systemd[1]: Started session-69.scope - Session 69 of User core. Mar 3 14:01:31.037649 sshd[8169]: Connection closed by 10.0.0.1 port 46444 Mar 3 14:01:31.038891 sshd-session[8129]: pam_unix(sshd:session): session closed for user core Mar 3 14:01:31.107079 systemd[1]: sshd@68-10.0.0.111:22-10.0.0.1:46444.service: Deactivated successfully. Mar 3 14:01:31.131417 systemd[1]: session-69.scope: Deactivated successfully. Mar 3 14:01:31.142620 systemd-logind[1542]: Session 69 logged out. Waiting for processes to exit. Mar 3 14:01:31.180166 systemd-logind[1542]: Removed session 69. Mar 3 14:01:32.286166 kubelet[2857]: E0303 14:01:32.286058 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:36.118648 systemd[1]: Started sshd@69-10.0.0.111:22-10.0.0.1:39292.service - OpenSSH per-connection server daemon (10.0.0.1:39292). 
Mar 3 14:01:36.592823 sshd[8189]: Accepted publickey for core from 10.0.0.1 port 39292 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:01:36.604200 sshd-session[8189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:01:36.687498 systemd-logind[1542]: New session 70 of user core. Mar 3 14:01:36.693173 systemd[1]: Started session-70.scope - Session 70 of User core. Mar 3 14:01:38.412813 sshd[8192]: Connection closed by 10.0.0.1 port 39292 Mar 3 14:01:38.396600 sshd-session[8189]: pam_unix(sshd:session): session closed for user core Mar 3 14:01:38.444589 systemd[1]: sshd@69-10.0.0.111:22-10.0.0.1:39292.service: Deactivated successfully. Mar 3 14:01:38.549833 systemd[1]: session-70.scope: Deactivated successfully. Mar 3 14:01:38.590260 systemd-logind[1542]: Session 70 logged out. Waiting for processes to exit. Mar 3 14:01:38.617128 systemd-logind[1542]: Removed session 70. Mar 3 14:01:42.302816 kubelet[2857]: E0303 14:01:42.299228 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:43.475614 systemd[1]: Started sshd@70-10.0.0.111:22-10.0.0.1:43756.service - OpenSSH per-connection server daemon (10.0.0.1:43756). Mar 3 14:01:43.790189 sshd[8229]: Accepted publickey for core from 10.0.0.1 port 43756 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:01:43.808100 sshd-session[8229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:01:43.863080 systemd-logind[1542]: New session 71 of user core. Mar 3 14:01:43.880316 systemd[1]: Started session-71.scope - Session 71 of User core. Mar 3 14:01:44.820099 sshd[8232]: Connection closed by 10.0.0.1 port 43756 Mar 3 14:01:44.833207 sshd-session[8229]: pam_unix(sshd:session): session closed for user core Mar 3 14:01:44.862343 systemd-logind[1542]: Session 71 logged out. 
Waiting for processes to exit. Mar 3 14:01:44.871326 systemd[1]: sshd@70-10.0.0.111:22-10.0.0.1:43756.service: Deactivated successfully. Mar 3 14:01:44.893189 systemd[1]: session-71.scope: Deactivated successfully. Mar 3 14:01:44.960134 systemd-logind[1542]: Removed session 71. Mar 3 14:01:49.867837 systemd[1]: Started sshd@71-10.0.0.111:22-10.0.0.1:43780.service - OpenSSH per-connection server daemon (10.0.0.1:43780). Mar 3 14:01:50.236289 sshd[8246]: Accepted publickey for core from 10.0.0.1 port 43780 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:01:50.275685 sshd-session[8246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:01:50.309773 systemd-logind[1542]: New session 72 of user core. Mar 3 14:01:50.345530 systemd[1]: Started session-72.scope - Session 72 of User core. Mar 3 14:01:51.041851 sshd[8249]: Connection closed by 10.0.0.1 port 43780 Mar 3 14:01:51.037514 sshd-session[8246]: pam_unix(sshd:session): session closed for user core Mar 3 14:01:51.075111 systemd[1]: sshd@71-10.0.0.111:22-10.0.0.1:43780.service: Deactivated successfully. Mar 3 14:01:51.101863 systemd[1]: session-72.scope: Deactivated successfully. Mar 3 14:01:51.138339 systemd-logind[1542]: Session 72 logged out. Waiting for processes to exit. Mar 3 14:01:51.178110 systemd-logind[1542]: Removed session 72. Mar 3 14:01:54.284584 kubelet[2857]: E0303 14:01:54.279340 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:01:56.146851 systemd[1]: Started sshd@72-10.0.0.111:22-10.0.0.1:53408.service - OpenSSH per-connection server daemon (10.0.0.1:53408). 
Mar 3 14:01:56.533128 sshd[8297]: Accepted publickey for core from 10.0.0.1 port 53408 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:01:56.545758 sshd-session[8297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:01:56.590098 systemd-logind[1542]: New session 73 of user core. Mar 3 14:01:56.656072 systemd[1]: Started session-73.scope - Session 73 of User core. Mar 3 14:01:57.565538 sshd[8330]: Connection closed by 10.0.0.1 port 53408 Mar 3 14:01:57.567337 sshd-session[8297]: pam_unix(sshd:session): session closed for user core Mar 3 14:01:57.585349 systemd[1]: sshd@72-10.0.0.111:22-10.0.0.1:53408.service: Deactivated successfully. Mar 3 14:01:57.595113 systemd[1]: session-73.scope: Deactivated successfully. Mar 3 14:01:57.611800 systemd-logind[1542]: Session 73 logged out. Waiting for processes to exit. Mar 3 14:01:57.645709 systemd-logind[1542]: Removed session 73. Mar 3 14:02:02.617577 systemd[1]: Started sshd@73-10.0.0.111:22-10.0.0.1:56838.service - OpenSSH per-connection server daemon (10.0.0.1:56838). Mar 3 14:02:02.989658 sshd[8371]: Accepted publickey for core from 10.0.0.1 port 56838 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:02:02.988041 sshd-session[8371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:02:03.073259 systemd-logind[1542]: New session 74 of user core. Mar 3 14:02:03.095370 systemd[1]: Started session-74.scope - Session 74 of User core. Mar 3 14:02:04.178609 sshd[8375]: Connection closed by 10.0.0.1 port 56838 Mar 3 14:02:04.177767 sshd-session[8371]: pam_unix(sshd:session): session closed for user core Mar 3 14:02:04.207368 systemd[1]: sshd@73-10.0.0.111:22-10.0.0.1:56838.service: Deactivated successfully. Mar 3 14:02:04.228851 systemd[1]: session-74.scope: Deactivated successfully. Mar 3 14:02:04.233483 systemd-logind[1542]: Session 74 logged out. Waiting for processes to exit. 
Mar 3 14:02:04.243305 systemd-logind[1542]: Removed session 74. Mar 3 14:02:09.258168 systemd[1]: Started sshd@74-10.0.0.111:22-10.0.0.1:56848.service - OpenSSH per-connection server daemon (10.0.0.1:56848). Mar 3 14:02:09.639577 sshd[8415]: Accepted publickey for core from 10.0.0.1 port 56848 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:02:09.651217 sshd-session[8415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:02:09.714850 systemd-logind[1542]: New session 75 of user core. Mar 3 14:02:09.758752 systemd[1]: Started session-75.scope - Session 75 of User core. Mar 3 14:02:10.438566 sshd[8419]: Connection closed by 10.0.0.1 port 56848 Mar 3 14:02:10.437864 sshd-session[8415]: pam_unix(sshd:session): session closed for user core Mar 3 14:02:10.461705 systemd[1]: sshd@74-10.0.0.111:22-10.0.0.1:56848.service: Deactivated successfully. Mar 3 14:02:10.478718 systemd[1]: session-75.scope: Deactivated successfully. Mar 3 14:02:10.489601 systemd-logind[1542]: Session 75 logged out. Waiting for processes to exit. Mar 3 14:02:10.503538 systemd-logind[1542]: Removed session 75. Mar 3 14:02:15.504103 systemd[1]: Started sshd@75-10.0.0.111:22-10.0.0.1:57772.service - OpenSSH per-connection server daemon (10.0.0.1:57772). Mar 3 14:02:15.741721 sshd[8432]: Accepted publickey for core from 10.0.0.1 port 57772 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:02:15.748403 sshd-session[8432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:02:15.775097 systemd-logind[1542]: New session 76 of user core. Mar 3 14:02:15.805792 systemd[1]: Started session-76.scope - Session 76 of User core. Mar 3 14:02:16.461784 sshd[8435]: Connection closed by 10.0.0.1 port 57772 Mar 3 14:02:16.466750 sshd-session[8432]: pam_unix(sshd:session): session closed for user core Mar 3 14:02:16.483567 systemd[1]: sshd@75-10.0.0.111:22-10.0.0.1:57772.service: Deactivated successfully. 
Mar 3 14:02:16.500753 systemd[1]: session-76.scope: Deactivated successfully. Mar 3 14:02:16.507772 systemd-logind[1542]: Session 76 logged out. Waiting for processes to exit. Mar 3 14:02:16.516059 systemd-logind[1542]: Removed session 76. Mar 3 14:02:18.281068 kubelet[2857]: E0303 14:02:18.276249 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:02:21.522573 systemd[1]: Started sshd@76-10.0.0.111:22-10.0.0.1:44018.service - OpenSSH per-connection server daemon (10.0.0.1:44018). Mar 3 14:02:21.766140 sshd[8448]: Accepted publickey for core from 10.0.0.1 port 44018 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:02:21.775250 sshd-session[8448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:02:21.822069 systemd-logind[1542]: New session 77 of user core. Mar 3 14:02:21.831216 systemd[1]: Started session-77.scope - Session 77 of User core. Mar 3 14:02:22.474610 sshd[8470]: Connection closed by 10.0.0.1 port 44018 Mar 3 14:02:22.476550 sshd-session[8448]: pam_unix(sshd:session): session closed for user core Mar 3 14:02:22.492667 systemd[1]: sshd@76-10.0.0.111:22-10.0.0.1:44018.service: Deactivated successfully. Mar 3 14:02:22.497385 systemd[1]: session-77.scope: Deactivated successfully. Mar 3 14:02:22.504233 systemd-logind[1542]: Session 77 logged out. Waiting for processes to exit. Mar 3 14:02:22.510130 systemd-logind[1542]: Removed session 77. Mar 3 14:02:27.549803 systemd[1]: Started sshd@77-10.0.0.111:22-10.0.0.1:44052.service - OpenSSH per-connection server daemon (10.0.0.1:44052). 
Mar 3 14:02:27.846131 sshd[8487]: Accepted publickey for core from 10.0.0.1 port 44052 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:02:27.852798 sshd-session[8487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:02:27.886604 systemd-logind[1542]: New session 78 of user core. Mar 3 14:02:27.925888 systemd[1]: Started session-78.scope - Session 78 of User core. Mar 3 14:02:28.658692 sshd[8490]: Connection closed by 10.0.0.1 port 44052 Mar 3 14:02:28.662235 sshd-session[8487]: pam_unix(sshd:session): session closed for user core Mar 3 14:02:28.696403 systemd[1]: sshd@77-10.0.0.111:22-10.0.0.1:44052.service: Deactivated successfully. Mar 3 14:02:28.732187 systemd[1]: session-78.scope: Deactivated successfully. Mar 3 14:02:28.747308 systemd-logind[1542]: Session 78 logged out. Waiting for processes to exit. Mar 3 14:02:28.762604 systemd-logind[1542]: Removed session 78. Mar 3 14:02:29.292662 kubelet[2857]: E0303 14:02:29.292622 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 3 14:02:33.697620 systemd[1]: Started sshd@78-10.0.0.111:22-10.0.0.1:35230.service - OpenSSH per-connection server daemon (10.0.0.1:35230). Mar 3 14:02:33.956432 sshd[8531]: Accepted publickey for core from 10.0.0.1 port 35230 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4 Mar 3 14:02:33.967869 sshd-session[8531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 3 14:02:34.017562 systemd-logind[1542]: New session 79 of user core. Mar 3 14:02:34.053631 systemd[1]: Started session-79.scope - Session 79 of User core. 
Mar 3 14:02:34.639128 sshd[8536]: Connection closed by 10.0.0.1 port 35230
Mar 3 14:02:34.643222 sshd-session[8531]: pam_unix(sshd:session): session closed for user core
Mar 3 14:02:34.662098 systemd[1]: sshd@78-10.0.0.111:22-10.0.0.1:35230.service: Deactivated successfully.
Mar 3 14:02:34.672598 systemd[1]: session-79.scope: Deactivated successfully.
Mar 3 14:02:34.681820 systemd-logind[1542]: Session 79 logged out. Waiting for processes to exit.
Mar 3 14:02:34.686453 systemd-logind[1542]: Removed session 79.
Mar 3 14:02:36.274982 kubelet[2857]: E0303 14:02:36.274861 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:02:39.686863 systemd[1]: Started sshd@79-10.0.0.111:22-10.0.0.1:35242.service - OpenSSH per-connection server daemon (10.0.0.1:35242).
Mar 3 14:02:40.040635 sshd[8574]: Accepted publickey for core from 10.0.0.1 port 35242 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:02:40.049825 sshd-session[8574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:02:40.099570 systemd-logind[1542]: New session 80 of user core.
Mar 3 14:02:40.137300 systemd[1]: Started session-80.scope - Session 80 of User core.
Mar 3 14:02:40.908050 sshd[8577]: Connection closed by 10.0.0.1 port 35242
Mar 3 14:02:40.907253 sshd-session[8574]: pam_unix(sshd:session): session closed for user core
Mar 3 14:02:40.949711 systemd[1]: sshd@79-10.0.0.111:22-10.0.0.1:35242.service: Deactivated successfully.
Mar 3 14:02:40.957339 systemd[1]: session-80.scope: Deactivated successfully.
Mar 3 14:02:40.983105 systemd-logind[1542]: Session 80 logged out. Waiting for processes to exit.
Mar 3 14:02:40.993464 systemd-logind[1542]: Removed session 80.
Mar 3 14:02:46.132415 systemd[1]: Started sshd@80-10.0.0.111:22-10.0.0.1:56200.service - OpenSSH per-connection server daemon (10.0.0.1:56200).
Mar 3 14:02:46.735174 sshd[8591]: Accepted publickey for core from 10.0.0.1 port 56200 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:02:46.765692 sshd-session[8591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:02:46.867270 systemd-logind[1542]: New session 81 of user core.
Mar 3 14:02:46.948214 systemd[1]: Started session-81.scope - Session 81 of User core.
Mar 3 14:02:47.765796 sshd[8605]: Connection closed by 10.0.0.1 port 56200
Mar 3 14:02:47.762278 sshd-session[8591]: pam_unix(sshd:session): session closed for user core
Mar 3 14:02:47.792340 systemd[1]: sshd@80-10.0.0.111:22-10.0.0.1:56200.service: Deactivated successfully.
Mar 3 14:02:47.802234 systemd[1]: session-81.scope: Deactivated successfully.
Mar 3 14:02:47.813588 systemd-logind[1542]: Session 81 logged out. Waiting for processes to exit.
Mar 3 14:02:47.837455 systemd-logind[1542]: Removed session 81.
Mar 3 14:02:48.289079 kubelet[2857]: E0303 14:02:48.285656 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:02:48.289850 kubelet[2857]: E0303 14:02:48.289332 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:02:52.834620 systemd[1]: Started sshd@81-10.0.0.111:22-10.0.0.1:51426.service - OpenSSH per-connection server daemon (10.0.0.1:51426).
Mar 3 14:02:53.213807 sshd[8646]: Accepted publickey for core from 10.0.0.1 port 51426 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:02:53.219732 sshd-session[8646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:02:53.253827 systemd-logind[1542]: New session 82 of user core.
Mar 3 14:02:53.268380 systemd[1]: Started session-82.scope - Session 82 of User core.
Mar 3 14:02:53.750098 sshd[8649]: Connection closed by 10.0.0.1 port 51426
Mar 3 14:02:53.756061 sshd-session[8646]: pam_unix(sshd:session): session closed for user core
Mar 3 14:02:53.799747 systemd[1]: sshd@81-10.0.0.111:22-10.0.0.1:51426.service: Deactivated successfully.
Mar 3 14:02:53.825320 systemd[1]: session-82.scope: Deactivated successfully.
Mar 3 14:02:53.849099 systemd-logind[1542]: Session 82 logged out. Waiting for processes to exit.
Mar 3 14:02:53.864666 systemd-logind[1542]: Removed session 82.
Mar 3 14:02:58.813263 systemd[1]: Started sshd@82-10.0.0.111:22-10.0.0.1:51458.service - OpenSSH per-connection server daemon (10.0.0.1:51458).
Mar 3 14:02:59.035729 sshd[8733]: Accepted publickey for core from 10.0.0.1 port 51458 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:02:59.041188 sshd-session[8733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:02:59.077044 systemd-logind[1542]: New session 83 of user core.
Mar 3 14:02:59.086385 systemd[1]: Started session-83.scope - Session 83 of User core.
Mar 3 14:02:59.606734 sshd[8738]: Connection closed by 10.0.0.1 port 51458
Mar 3 14:02:59.606299 sshd-session[8733]: pam_unix(sshd:session): session closed for user core
Mar 3 14:02:59.625414 systemd[1]: sshd@82-10.0.0.111:22-10.0.0.1:51458.service: Deactivated successfully.
Mar 3 14:02:59.639756 systemd[1]: session-83.scope: Deactivated successfully.
Mar 3 14:02:59.651387 systemd-logind[1542]: Session 83 logged out. Waiting for processes to exit.
Mar 3 14:02:59.660138 systemd-logind[1542]: Removed session 83.
Mar 3 14:03:04.659203 systemd[1]: Started sshd@83-10.0.0.111:22-10.0.0.1:57266.service - OpenSSH per-connection server daemon (10.0.0.1:57266).
Mar 3 14:03:04.900134 sshd[8753]: Accepted publickey for core from 10.0.0.1 port 57266 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:03:04.902504 sshd-session[8753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:04.941069 systemd-logind[1542]: New session 84 of user core.
Mar 3 14:03:04.985203 systemd[1]: Started session-84.scope - Session 84 of User core.
Mar 3 14:03:05.630097 sshd[8756]: Connection closed by 10.0.0.1 port 57266
Mar 3 14:03:05.627721 sshd-session[8753]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:05.644179 systemd[1]: sshd@83-10.0.0.111:22-10.0.0.1:57266.service: Deactivated successfully.
Mar 3 14:03:05.656434 systemd[1]: session-84.scope: Deactivated successfully.
Mar 3 14:03:05.663449 systemd-logind[1542]: Session 84 logged out. Waiting for processes to exit.
Mar 3 14:03:05.678615 systemd-logind[1542]: Removed session 84.
Mar 3 14:03:10.276842 kubelet[2857]: E0303 14:03:10.276273 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:03:10.733106 systemd[1]: Started sshd@84-10.0.0.111:22-10.0.0.1:59348.service - OpenSSH per-connection server daemon (10.0.0.1:59348).
Mar 3 14:03:11.004654 sshd[8812]: Accepted publickey for core from 10.0.0.1 port 59348 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:03:11.005508 sshd-session[8812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:11.034514 systemd-logind[1542]: New session 85 of user core.
Mar 3 14:03:11.053785 systemd[1]: Started session-85.scope - Session 85 of User core.
Mar 3 14:03:11.514070 sshd[8815]: Connection closed by 10.0.0.1 port 59348
Mar 3 14:03:11.516794 sshd-session[8812]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:11.540181 systemd[1]: sshd@84-10.0.0.111:22-10.0.0.1:59348.service: Deactivated successfully.
Mar 3 14:03:11.552990 systemd[1]: session-85.scope: Deactivated successfully.
Mar 3 14:03:11.559688 systemd-logind[1542]: Session 85 logged out. Waiting for processes to exit.
Mar 3 14:03:11.573373 systemd-logind[1542]: Removed session 85.
Mar 3 14:03:16.283164 kubelet[2857]: E0303 14:03:16.279535 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:03:16.602315 systemd[1]: Started sshd@85-10.0.0.111:22-10.0.0.1:59376.service - OpenSSH per-connection server daemon (10.0.0.1:59376).
Mar 3 14:03:16.862738 sshd[8829]: Accepted publickey for core from 10.0.0.1 port 59376 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:03:16.872069 sshd-session[8829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:16.936306 systemd-logind[1542]: New session 86 of user core.
Mar 3 14:03:16.955282 systemd[1]: Started session-86.scope - Session 86 of User core.
Mar 3 14:03:17.535062 sshd[8832]: Connection closed by 10.0.0.1 port 59376
Mar 3 14:03:17.535717 sshd-session[8829]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:17.554021 systemd[1]: sshd@85-10.0.0.111:22-10.0.0.1:59376.service: Deactivated successfully.
Mar 3 14:03:17.560484 systemd[1]: session-86.scope: Deactivated successfully.
Mar 3 14:03:17.565180 systemd-logind[1542]: Session 86 logged out. Waiting for processes to exit.
Mar 3 14:03:17.578475 systemd-logind[1542]: Removed session 86.
Mar 3 14:03:22.583563 systemd[1]: Started sshd@86-10.0.0.111:22-10.0.0.1:50754.service - OpenSSH per-connection server daemon (10.0.0.1:50754).
Mar 3 14:03:22.792316 sshd[8867]: Accepted publickey for core from 10.0.0.1 port 50754 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:03:22.802083 sshd-session[8867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:22.832197 systemd-logind[1542]: New session 87 of user core.
Mar 3 14:03:22.846800 systemd[1]: Started session-87.scope - Session 87 of User core.
Mar 3 14:03:23.251009 sshd[8870]: Connection closed by 10.0.0.1 port 50754
Mar 3 14:03:23.251165 sshd-session[8867]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:23.276786 systemd[1]: sshd@86-10.0.0.111:22-10.0.0.1:50754.service: Deactivated successfully.
Mar 3 14:03:23.297041 systemd[1]: session-87.scope: Deactivated successfully.
Mar 3 14:03:23.303806 systemd-logind[1542]: Session 87 logged out. Waiting for processes to exit.
Mar 3 14:03:23.315728 systemd-logind[1542]: Removed session 87.
Mar 3 14:03:28.309020 systemd[1]: Started sshd@87-10.0.0.111:22-10.0.0.1:50768.service - OpenSSH per-connection server daemon (10.0.0.1:50768).
Mar 3 14:03:28.461162 sshd[8896]: Accepted publickey for core from 10.0.0.1 port 50768 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:03:28.465762 sshd-session[8896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:28.497270 systemd-logind[1542]: New session 88 of user core.
Mar 3 14:03:28.509442 systemd[1]: Started session-88.scope - Session 88 of User core.
Mar 3 14:03:29.096111 sshd[8899]: Connection closed by 10.0.0.1 port 50768
Mar 3 14:03:29.097794 sshd-session[8896]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:29.112471 systemd[1]: sshd@87-10.0.0.111:22-10.0.0.1:50768.service: Deactivated successfully.
Mar 3 14:03:29.127660 systemd[1]: session-88.scope: Deactivated successfully.
Mar 3 14:03:29.135851 systemd-logind[1542]: Session 88 logged out. Waiting for processes to exit.
Mar 3 14:03:29.139389 systemd-logind[1542]: Removed session 88.
Mar 3 14:03:33.280496 kubelet[2857]: E0303 14:03:33.277593 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:03:34.143169 systemd[1]: Started sshd@88-10.0.0.111:22-10.0.0.1:38240.service - OpenSSH per-connection server daemon (10.0.0.1:38240).
Mar 3 14:03:34.688825 sshd[8945]: Accepted publickey for core from 10.0.0.1 port 38240 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:03:34.692714 sshd-session[8945]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:34.740454 systemd-logind[1542]: New session 89 of user core.
Mar 3 14:03:34.755141 systemd[1]: Started session-89.scope - Session 89 of User core.
Mar 3 14:03:35.529529 sshd[8948]: Connection closed by 10.0.0.1 port 38240
Mar 3 14:03:35.532268 sshd-session[8945]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:35.548746 systemd[1]: sshd@88-10.0.0.111:22-10.0.0.1:38240.service: Deactivated successfully.
Mar 3 14:03:35.561558 systemd[1]: session-89.scope: Deactivated successfully.
Mar 3 14:03:35.572224 systemd-logind[1542]: Session 89 logged out. Waiting for processes to exit.
Mar 3 14:03:35.590729 systemd-logind[1542]: Removed session 89.
Mar 3 14:03:37.296067 kubelet[2857]: E0303 14:03:37.293436 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:03:40.692086 systemd[1]: Started sshd@89-10.0.0.111:22-10.0.0.1:51192.service - OpenSSH per-connection server daemon (10.0.0.1:51192).
Mar 3 14:03:41.431812 sshd[8985]: Accepted publickey for core from 10.0.0.1 port 51192 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:03:41.441004 sshd-session[8985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:41.698273 systemd-logind[1542]: New session 90 of user core.
Mar 3 14:03:41.783421 systemd[1]: Started session-90.scope - Session 90 of User core.
Mar 3 14:03:43.329872 sshd[8988]: Connection closed by 10.0.0.1 port 51192
Mar 3 14:03:43.325717 sshd-session[8985]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:43.341395 systemd[1]: sshd@89-10.0.0.111:22-10.0.0.1:51192.service: Deactivated successfully.
Mar 3 14:03:43.358870 systemd[1]: session-90.scope: Deactivated successfully.
Mar 3 14:03:43.371566 systemd-logind[1542]: Session 90 logged out. Waiting for processes to exit.
Mar 3 14:03:43.382869 systemd-logind[1542]: Removed session 90.
Mar 3 14:03:46.293137 kubelet[2857]: E0303 14:03:46.281815 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 3 14:03:48.363202 systemd[1]: Started sshd@90-10.0.0.111:22-10.0.0.1:51218.service - OpenSSH per-connection server daemon (10.0.0.1:51218).
Mar 3 14:03:48.568084 sshd[9003]: Accepted publickey for core from 10.0.0.1 port 51218 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:03:48.573949 sshd-session[9003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:48.616559 systemd-logind[1542]: New session 91 of user core.
Mar 3 14:03:48.634329 systemd[1]: Started session-91.scope - Session 91 of User core.
Mar 3 14:03:49.069546 sshd[9006]: Connection closed by 10.0.0.1 port 51218
Mar 3 14:03:49.070232 sshd-session[9003]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:49.089850 systemd[1]: sshd@90-10.0.0.111:22-10.0.0.1:51218.service: Deactivated successfully.
Mar 3 14:03:49.100723 systemd[1]: session-91.scope: Deactivated successfully.
Mar 3 14:03:49.124268 systemd-logind[1542]: Session 91 logged out. Waiting for processes to exit.
Mar 3 14:03:49.130154 systemd-logind[1542]: Removed session 91.
Mar 3 14:03:54.108105 systemd[1]: Started sshd@91-10.0.0.111:22-10.0.0.1:34232.service - OpenSSH per-connection server daemon (10.0.0.1:34232).
Mar 3 14:03:54.291339 sshd[9040]: Accepted publickey for core from 10.0.0.1 port 34232 ssh2: RSA SHA256:32UhPjaQWCRIk78qzH4+8SldxGHDt/Mq5Nh3Isholw4
Mar 3 14:03:54.300281 sshd-session[9040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 3 14:03:54.357828 systemd-logind[1542]: New session 92 of user core.
Mar 3 14:03:54.381865 systemd[1]: Started session-92.scope - Session 92 of User core.
Mar 3 14:03:54.771776 sshd[9043]: Connection closed by 10.0.0.1 port 34232
Mar 3 14:03:54.771374 sshd-session[9040]: pam_unix(sshd:session): session closed for user core
Mar 3 14:03:54.783140 systemd[1]: sshd@91-10.0.0.111:22-10.0.0.1:34232.service: Deactivated successfully.
Mar 3 14:03:54.786874 systemd[1]: session-92.scope: Deactivated successfully.
Mar 3 14:03:54.796545 systemd-logind[1542]: Session 92 logged out. Waiting for processes to exit.
Mar 3 14:03:54.801008 systemd-logind[1542]: Removed session 92.