Mar 10 01:30:19.416391 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 9 23:01:22 -00 2026
Mar 10 01:30:19.416421 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bcd0808bf4ec60436f0ff2e8373a873eb88ae42d4ac26e6e6d81129499700895
Mar 10 01:30:19.416435 kernel: BIOS-provided physical RAM map:
Mar 10 01:30:19.416443 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 10 01:30:19.416451 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 10 01:30:19.416460 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 10 01:30:19.416472 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 10 01:30:19.416481 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 10 01:30:19.416489 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 10 01:30:19.416497 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 10 01:30:19.416505 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 10 01:30:19.416516 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 10 01:30:19.416524 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 10 01:30:19.416535 kernel: NX (Execute Disable) protection: active
Mar 10 01:30:19.416546 kernel: APIC: Static calls initialized
Mar 10 01:30:19.416555 kernel: SMBIOS 2.8 present.
Mar 10 01:30:19.416566 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 10 01:30:19.416575 kernel: DMI: Memory slots populated: 1/1
Mar 10 01:30:19.416583 kernel: Hypervisor detected: KVM
Mar 10 01:30:19.416593 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 10 01:30:19.416603 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 10 01:30:19.416612 kernel: kvm-clock: using sched offset of 11253479363 cycles
Mar 10 01:30:19.416622 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 10 01:30:19.416633 kernel: tsc: Detected 2445.426 MHz processor
Mar 10 01:30:19.416644 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 10 01:30:19.416655 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 10 01:30:19.416669 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 10 01:30:19.416681 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 10 01:30:19.416692 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 10 01:30:19.416701 kernel: Using GB pages for direct mapping
Mar 10 01:30:19.416710 kernel: ACPI: Early table checksum verification disabled
Mar 10 01:30:19.416718 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 10 01:30:19.416727 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:30:19.416736 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:30:19.416744 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:30:19.416757 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 10 01:30:19.416768 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:30:19.416776 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:30:19.416785 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:30:19.416794 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 10 01:30:19.416808 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 10 01:30:19.416821 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 10 01:30:19.416878 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 10 01:30:19.416888 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 10 01:30:19.416897 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 10 01:30:19.416907 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 10 01:30:19.416918 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 10 01:30:19.416929 kernel: No NUMA configuration found
Mar 10 01:30:19.416938 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 10 01:30:19.416951 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Mar 10 01:30:19.416960 kernel: Zone ranges:
Mar 10 01:30:19.416970 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 10 01:30:19.416980 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 10 01:30:19.416991 kernel: Normal empty
Mar 10 01:30:19.417000 kernel: Device empty
Mar 10 01:30:19.417011 kernel: Movable zone start for each node
Mar 10 01:30:19.417022 kernel: Early memory node ranges
Mar 10 01:30:19.417033 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 10 01:30:19.417045 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 10 01:30:19.417061 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 10 01:30:19.417071 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 10 01:30:19.417080 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 10 01:30:19.417089 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 10 01:30:19.417098 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 10 01:30:19.417107 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 10 01:30:19.417116 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 10 01:30:19.417199 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 10 01:30:19.417213 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 10 01:30:19.417229 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 10 01:30:19.417238 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 10 01:30:19.417247 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 10 01:30:19.417256 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 10 01:30:19.417265 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 10 01:30:19.417274 kernel: TSC deadline timer available
Mar 10 01:30:19.417286 kernel: CPU topo: Max. logical packages: 1
Mar 10 01:30:19.417297 kernel: CPU topo: Max. logical dies: 1
Mar 10 01:30:19.417306 kernel: CPU topo: Max. dies per package: 1
Mar 10 01:30:19.417319 kernel: CPU topo: Max. threads per core: 1
Mar 10 01:30:19.417328 kernel: CPU topo: Num. cores per package: 4
Mar 10 01:30:19.417337 kernel: CPU topo: Num. threads per package: 4
Mar 10 01:30:19.417347 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Mar 10 01:30:19.417359 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 10 01:30:19.417368 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 10 01:30:19.417377 kernel: kvm-guest: setup PV sched yield
Mar 10 01:30:19.417386 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 10 01:30:19.417395 kernel: Booting paravirtualized kernel on KVM
Mar 10 01:30:19.417410 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 10 01:30:19.417420 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 10 01:30:19.417430 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Mar 10 01:30:19.417439 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Mar 10 01:30:19.417449 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 10 01:30:19.417459 kernel: kvm-guest: PV spinlocks enabled
Mar 10 01:30:19.417470 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 10 01:30:19.417483 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bcd0808bf4ec60436f0ff2e8373a873eb88ae42d4ac26e6e6d81129499700895
Mar 10 01:30:19.417499 kernel: random: crng init done
Mar 10 01:30:19.417510 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 10 01:30:19.417522 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 10 01:30:19.417531 kernel: Fallback order for Node 0: 0
Mar 10 01:30:19.417540 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Mar 10 01:30:19.417549 kernel: Policy zone: DMA32
Mar 10 01:30:19.417558 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 10 01:30:19.417567 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 10 01:30:19.417577 kernel: ftrace: allocating 40099 entries in 157 pages
Mar 10 01:30:19.417589 kernel: ftrace: allocated 157 pages with 5 groups
Mar 10 01:30:19.417599 kernel: Dynamic Preempt: voluntary
Mar 10 01:30:19.417608 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 10 01:30:19.417620 kernel: rcu: RCU event tracing is enabled.
Mar 10 01:30:19.417630 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 10 01:30:19.417639 kernel: Trampoline variant of Tasks RCU enabled.
Mar 10 01:30:19.417648 kernel: Rude variant of Tasks RCU enabled.
Mar 10 01:30:19.417657 kernel: Tracing variant of Tasks RCU enabled.
Mar 10 01:30:19.417667 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 10 01:30:19.417683 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 10 01:30:19.417692 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:30:19.417701 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:30:19.417710 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 10 01:30:19.417720 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 10 01:30:19.417730 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 10 01:30:19.417752 kernel: Console: colour VGA+ 80x25
Mar 10 01:30:19.417765 kernel: printk: legacy console [ttyS0] enabled
Mar 10 01:30:19.417774 kernel: ACPI: Core revision 20240827
Mar 10 01:30:19.417783 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 10 01:30:19.417793 kernel: APIC: Switch to symmetric I/O mode setup
Mar 10 01:30:19.417805 kernel: x2apic enabled
Mar 10 01:30:19.417818 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 10 01:30:19.417879 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 10 01:30:19.417894 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 10 01:30:19.417906 kernel: kvm-guest: setup PV IPIs
Mar 10 01:30:19.417917 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 10 01:30:19.417932 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 10 01:30:19.417944 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 10 01:30:19.417957 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 10 01:30:19.417968 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 10 01:30:19.417979 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 10 01:30:19.417991 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 10 01:30:19.418003 kernel: Spectre V2 : Mitigation: Retpolines
Mar 10 01:30:19.418015 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 10 01:30:19.418027 kernel: Speculative Store Bypass: Vulnerable
Mar 10 01:30:19.418043 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 10 01:30:19.418056 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 10 01:30:19.418068 kernel: active return thunk: srso_alias_return_thunk
Mar 10 01:30:19.418080 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 10 01:30:19.418092 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 10 01:30:19.418104 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 10 01:30:19.418116 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 10 01:30:19.418197 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 10 01:30:19.418216 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 10 01:30:19.418229 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 10 01:30:19.418242 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 10 01:30:19.418254 kernel: Freeing SMP alternatives memory: 32K
Mar 10 01:30:19.418266 kernel: pid_max: default: 32768 minimum: 301
Mar 10 01:30:19.418278 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Mar 10 01:30:19.418291 kernel: landlock: Up and running.
Mar 10 01:30:19.418303 kernel: SELinux: Initializing.
Mar 10 01:30:19.418315 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:30:19.418332 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 10 01:30:19.418345 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 10 01:30:19.418358 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 10 01:30:19.418369 kernel: signal: max sigframe size: 1776
Mar 10 01:30:19.418379 kernel: rcu: Hierarchical SRCU implementation.
Mar 10 01:30:19.418389 kernel: rcu: Max phase no-delay instances is 400.
Mar 10 01:30:19.418398 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Mar 10 01:30:19.418408 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 10 01:30:19.418418 kernel: smp: Bringing up secondary CPUs ...
Mar 10 01:30:19.418432 kernel: smpboot: x86: Booting SMP configuration:
Mar 10 01:30:19.418441 kernel: .... node #0, CPUs: #1 #2 #3
Mar 10 01:30:19.418451 kernel: smp: Brought up 1 node, 4 CPUs
Mar 10 01:30:19.418460 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 10 01:30:19.418473 kernel: Memory: 2420720K/2571752K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46204K init, 2556K bss, 145096K reserved, 0K cma-reserved)
Mar 10 01:30:19.418483 kernel: devtmpfs: initialized
Mar 10 01:30:19.418492 kernel: x86/mm: Memory block size: 128MB
Mar 10 01:30:19.418501 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 10 01:30:19.418511 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 10 01:30:19.418525 kernel: pinctrl core: initialized pinctrl subsystem
Mar 10 01:30:19.418537 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 10 01:30:19.418547 kernel: audit: initializing netlink subsys (disabled)
Mar 10 01:30:19.418557 kernel: audit: type=2000 audit(1773106209.261:1): state=initialized audit_enabled=0 res=1
Mar 10 01:30:19.418566 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 10 01:30:19.418575 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 10 01:30:19.418585 kernel: cpuidle: using governor menu
Mar 10 01:30:19.418598 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 10 01:30:19.418609 kernel: dca service started, version 1.12.1
Mar 10 01:30:19.418622 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Mar 10 01:30:19.418631 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 10 01:30:19.418641 kernel: PCI: Using configuration type 1 for base access
Mar 10 01:30:19.418652 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 10 01:30:19.418664 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 10 01:30:19.418674 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 10 01:30:19.418684 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 10 01:30:19.418693 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 10 01:30:19.418703 kernel: ACPI: Added _OSI(Module Device)
Mar 10 01:30:19.418717 kernel: ACPI: Added _OSI(Processor Device)
Mar 10 01:30:19.418728 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 10 01:30:19.418738 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 10 01:30:19.418749 kernel: ACPI: Interpreter enabled
Mar 10 01:30:19.418760 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 10 01:30:19.418771 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 10 01:30:19.418784 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 10 01:30:19.418796 kernel: PCI: Using E820 reservations for host bridge windows
Mar 10 01:30:19.418809 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 10 01:30:19.418824 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 10 01:30:19.420061 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 10 01:30:19.420353 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 10 01:30:19.420525 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 10 01:30:19.420543 kernel: PCI host bridge to bus 0000:00
Mar 10 01:30:19.420896 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 10 01:30:19.421062 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 10 01:30:19.421305 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 10 01:30:19.421457 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 10 01:30:19.421620 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 10 01:30:19.421770 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 10 01:30:19.421992 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 10 01:30:19.422348 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Mar 10 01:30:19.423118 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Mar 10 01:30:19.423369 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Mar 10 01:30:19.423533 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Mar 10 01:30:19.423906 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Mar 10 01:30:19.424245 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 10 01:30:19.424442 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar 10 01:30:19.424627 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Mar 10 01:30:19.424810 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Mar 10 01:30:19.425046 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 10 01:30:19.425362 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar 10 01:30:19.425531 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Mar 10 01:30:19.426414 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Mar 10 01:30:19.426622 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 10 01:30:19.426931 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar 10 01:30:19.427259 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Mar 10 01:30:19.427446 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Mar 10 01:30:19.427616 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 10 01:30:19.428912 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Mar 10 01:30:19.429317 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Mar 10 01:30:19.429503 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 10 01:30:19.429766 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Mar 10 01:30:19.429951 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Mar 10 01:30:19.430118 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Mar 10 01:30:19.430352 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Mar 10 01:30:19.430522 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Mar 10 01:30:19.430535 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 10 01:30:19.430543 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 10 01:30:19.430597 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 10 01:30:19.430604 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 10 01:30:19.430611 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 10 01:30:19.430618 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 10 01:30:19.430626 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 10 01:30:19.430633 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 10 01:30:19.430640 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 10 01:30:19.430647 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 10 01:30:19.430654 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 10 01:30:19.430663 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 10 01:30:19.430670 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 10 01:30:19.430677 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 10 01:30:19.430684 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 10 01:30:19.430691 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 10 01:30:19.430698 kernel: iommu: Default domain type: Translated
Mar 10 01:30:19.430706 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 10 01:30:19.430713 kernel: PCI: Using ACPI for IRQ routing
Mar 10 01:30:19.430720 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 10 01:30:19.430729 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 10 01:30:19.430737 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 10 01:30:19.430910 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 10 01:30:19.431031 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 10 01:30:19.431218 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 10 01:30:19.431230 kernel: vgaarb: loaded
Mar 10 01:30:19.431238 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 10 01:30:19.431245 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 10 01:30:19.431257 kernel: clocksource: Switched to clocksource kvm-clock
Mar 10 01:30:19.431264 kernel: VFS: Disk quotas dquot_6.6.0
Mar 10 01:30:19.431271 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 10 01:30:19.431278 kernel: pnp: PnP ACPI init
Mar 10 01:30:19.431591 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 10 01:30:19.431607 kernel: pnp: PnP ACPI: found 6 devices
Mar 10 01:30:19.431614 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 10 01:30:19.431622 kernel: NET: Registered PF_INET protocol family
Mar 10 01:30:19.431629 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 10 01:30:19.431641 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 10 01:30:19.431648 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 10 01:30:19.431655 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 10 01:30:19.431662 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 10 01:30:19.431670 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 10 01:30:19.431677 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:30:19.431684 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 10 01:30:19.431691 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 10 01:30:19.431700 kernel: NET: Registered PF_XDP protocol family
Mar 10 01:30:19.431910 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 10 01:30:19.432028 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 10 01:30:19.432270 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 10 01:30:19.432419 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 10 01:30:19.432567 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 10 01:30:19.432677 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 10 01:30:19.432687 kernel: PCI: CLS 0 bytes, default 64
Mar 10 01:30:19.432694 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x233fd7ba1b0, max_idle_ns: 440795295779 ns
Mar 10 01:30:19.432712 kernel: Initialise system trusted keyrings
Mar 10 01:30:19.432725 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 10 01:30:19.432737 kernel: Key type asymmetric registered
Mar 10 01:30:19.432747 kernel: Asymmetric key parser 'x509' registered
Mar 10 01:30:19.432757 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Mar 10 01:30:19.432767 kernel: io scheduler mq-deadline registered
Mar 10 01:30:19.432776 kernel: io scheduler kyber registered
Mar 10 01:30:19.432786 kernel: io scheduler bfq registered
Mar 10 01:30:19.432798 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 10 01:30:19.432818 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 10 01:30:19.432882 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 10 01:30:19.432897 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 10 01:30:19.432907 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 10 01:30:19.432917 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 10 01:30:19.432927 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 10 01:30:19.432937 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 10 01:30:19.432946 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 10 01:30:19.433319 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 10 01:30:19.433349 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 10 01:30:19.433515 kernel: rtc_cmos 00:04: registered as rtc0
Mar 10 01:30:19.433676 kernel: rtc_cmos 00:04: setting system clock to 2026-03-10T01:30:18 UTC (1773106218)
Mar 10 01:30:19.433797 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 10 01:30:19.433808 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 10 01:30:19.433816 kernel: NET: Registered PF_INET6 protocol family
Mar 10 01:30:19.433823 kernel: Segment Routing with IPv6
Mar 10 01:30:19.433894 kernel: In-situ OAM (IOAM) with IPv6
Mar 10 01:30:19.433908 kernel: NET: Registered PF_PACKET protocol family
Mar 10 01:30:19.433915 kernel: Key type dns_resolver registered
Mar 10 01:30:19.433922 kernel: IPI shorthand broadcast: enabled
Mar 10 01:30:19.433930 kernel: sched_clock: Marking stable (9015041467, 676584184)->(10398287551, -706661900)
Mar 10 01:30:19.433937 kernel: registered taskstats version 1
Mar 10 01:30:19.433944 kernel: Loading compiled-in X.509 certificates
Mar 10 01:30:19.433952 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 64a6e3ad023f02465a8c66e81554b4b2e64fb972'
Mar 10 01:30:19.433965 kernel: Demotion targets for Node 0: null
Mar 10 01:30:19.433976 kernel: Key type .fscrypt registered
Mar 10 01:30:19.433988 kernel: Key type fscrypt-provisioning registered
Mar 10 01:30:19.433996 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 10 01:30:19.434003 kernel: ima: Allocated hash algorithm: sha1
Mar 10 01:30:19.434010 kernel: ima: No architecture policies found
Mar 10 01:30:19.434017 kernel: clk: Disabling unused clocks
Mar 10 01:30:19.434024 kernel: Warning: unable to open an initial console.
Mar 10 01:30:19.434031 kernel: Freeing unused kernel image (initmem) memory: 46204K
Mar 10 01:30:19.434039 kernel: Write protecting the kernel read-only data: 40960k
Mar 10 01:30:19.434048 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Mar 10 01:30:19.434055 kernel: Run /init as init process
Mar 10 01:30:19.434062 kernel: with arguments:
Mar 10 01:30:19.434069 kernel: /init
Mar 10 01:30:19.434076 kernel: with environment:
Mar 10 01:30:19.434083 kernel: HOME=/
Mar 10 01:30:19.434090 kernel: TERM=linux
Mar 10 01:30:19.434098 systemd[1]: Successfully made /usr/ read-only.
Mar 10 01:30:19.434109 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 10 01:30:19.434120 systemd[1]: Detected virtualization kvm.
Mar 10 01:30:19.434188 systemd[1]: Detected architecture x86-64.
Mar 10 01:30:19.434198 systemd[1]: Running in initrd.
Mar 10 01:30:19.434205 systemd[1]: No hostname configured, using default hostname.
Mar 10 01:30:19.434213 systemd[1]: Hostname set to .
Mar 10 01:30:19.434220 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:30:19.434228 systemd[1]: Queued start job for default target initrd.target.
Mar 10 01:30:19.434239 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:30:19.434259 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:30:19.434269 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 10 01:30:19.434277 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:30:19.434285 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 10 01:30:19.434296 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 10 01:30:19.434305 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 10 01:30:19.434313 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 10 01:30:19.434322 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:30:19.434336 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:30:19.434350 systemd[1]: Reached target paths.target - Path Units.
Mar 10 01:30:19.434361 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:30:19.434372 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:30:19.434387 systemd[1]: Reached target timers.target - Timer Units.
Mar 10 01:30:19.434398 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:30:19.434408 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:30:19.434423 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 10 01:30:19.434435 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 10 01:30:19.434446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 10 01:30:19.434457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 10 01:30:19.434467 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 10 01:30:19.434478 systemd[1]: Reached target sockets.target - Socket Units. Mar 10 01:30:19.434495 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 10 01:30:19.434508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 10 01:30:19.434520 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 10 01:30:19.434528 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 10 01:30:19.434536 systemd[1]: Starting systemd-fsck-usr.service... Mar 10 01:30:19.434544 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 10 01:30:19.434551 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 10 01:30:19.434559 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:30:19.434570 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 10 01:30:19.434581 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 10 01:30:19.434620 systemd-journald[202]: Collecting audit messages is disabled. Mar 10 01:30:19.434641 systemd[1]: Finished systemd-fsck-usr.service. Mar 10 01:30:19.434650 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Mar 10 01:30:19.434660 systemd-journald[202]: Journal started Mar 10 01:30:19.434678 systemd-journald[202]: Runtime Journal (/run/log/journal/b5ed4fc6b85540319024dd06b2e89cb5) is 6M, max 48.3M, 42.2M free. Mar 10 01:30:19.428997 systemd-modules-load[204]: Inserted module 'overlay' Mar 10 01:30:19.452777 systemd[1]: Started systemd-journald.service - Journal Service. Mar 10 01:30:19.458385 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 10 01:30:19.767322 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 10 01:30:20.175553 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 10 01:30:20.175582 kernel: Bridge firewalling registered Mar 10 01:30:20.162971 systemd-modules-load[204]: Inserted module 'br_netfilter' Mar 10 01:30:20.191686 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 10 01:30:20.194723 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:30:20.210497 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 10 01:30:20.226654 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 10 01:30:20.232013 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 10 01:30:20.240578 systemd-tmpfiles[216]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Mar 10 01:30:20.248495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 10 01:30:20.261543 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 10 01:30:20.270488 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Mar 10 01:30:20.306386 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 10 01:30:20.313399 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 10 01:30:20.320720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 10 01:30:20.335934 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bcd0808bf4ec60436f0ff2e8373a873eb88ae42d4ac26e6e6d81129499700895 Mar 10 01:30:20.422116 systemd-resolved[250]: Positive Trust Anchors: Mar 10 01:30:20.422263 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 10 01:30:20.422311 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 10 01:30:20.431074 systemd-resolved[250]: Defaulting to hostname 'linux'. Mar 10 01:30:20.434446 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 10 01:30:20.443915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 10 01:30:20.617426 kernel: SCSI subsystem initialized Mar 10 01:30:20.653498 kernel: Loading iSCSI transport class v2.0-870. 
Mar 10 01:30:20.697353 kernel: iscsi: registered transport (tcp) Mar 10 01:30:20.732318 kernel: iscsi: registered transport (qla4xxx) Mar 10 01:30:20.732404 kernel: QLogic iSCSI HBA Driver Mar 10 01:30:20.790039 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 10 01:30:20.845094 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 10 01:30:20.848899 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 10 01:30:21.025417 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 10 01:30:21.035101 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 10 01:30:21.214064 kernel: raid6: avx2x4 gen() 19000 MB/s Mar 10 01:30:21.233653 kernel: raid6: avx2x2 gen() 19116 MB/s Mar 10 01:30:21.256647 kernel: raid6: avx2x1 gen() 5958 MB/s Mar 10 01:30:21.256783 kernel: raid6: using algorithm avx2x2 gen() 19116 MB/s Mar 10 01:30:21.280430 kernel: raid6: .... xor() 15106 MB/s, rmw enabled Mar 10 01:30:21.280604 kernel: raid6: using avx2x2 recovery algorithm Mar 10 01:30:21.336360 kernel: xor: automatically using best checksumming function avx Mar 10 01:30:21.966298 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 10 01:30:22.026002 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 10 01:30:22.037780 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 10 01:30:22.128414 systemd-udevd[454]: Using default interface naming scheme 'v255'. Mar 10 01:30:22.136908 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 01:30:22.143195 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 10 01:30:22.252738 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation Mar 10 01:30:22.366977 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 10 01:30:22.376039 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 10 01:30:22.518076 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 10 01:30:22.524977 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 10 01:30:22.630215 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 10 01:30:22.650347 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 10 01:30:22.686997 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 10 01:30:22.687055 kernel: GPT:9289727 != 19775487 Mar 10 01:30:22.693788 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 10 01:30:22.694086 kernel: GPT:9289727 != 19775487 Mar 10 01:30:22.694112 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 10 01:30:22.694236 kernel: cryptd: max_cpu_qlen set to 1000 Mar 10 01:30:22.694259 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:30:22.712456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 10 01:30:22.728787 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Mar 10 01:30:22.722044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:30:22.745593 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:30:22.751321 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:30:22.768391 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 10 01:30:22.820941 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 10 01:30:22.822392 kernel: libata version 3.00 loaded. Mar 10 01:30:22.853568 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Mar 10 01:30:22.867314 kernel: ahci 0000:00:1f.2: version 3.0 Mar 10 01:30:22.870329 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 10 01:30:22.873217 kernel: AES CTR mode by8 optimization enabled Mar 10 01:30:22.876244 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Mar 10 01:30:22.876439 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Mar 10 01:30:22.876649 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 10 01:30:22.902224 kernel: scsi host0: ahci Mar 10 01:30:22.907379 kernel: scsi host1: ahci Mar 10 01:30:22.910821 kernel: scsi host2: ahci Mar 10 01:30:22.911939 kernel: scsi host3: ahci Mar 10 01:30:22.915538 kernel: scsi host4: ahci Mar 10 01:30:22.916335 kernel: scsi host5: ahci Mar 10 01:30:22.917290 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Mar 10 01:30:22.917319 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Mar 10 01:30:22.917367 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Mar 10 01:30:22.917384 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Mar 10 01:30:22.917398 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Mar 10 01:30:22.917411 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Mar 10 01:30:22.920554 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 10 01:30:23.180375 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 10 01:30:23.191802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 10 01:30:23.227275 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 10 01:30:23.229076 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 10 01:30:23.280048 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 10 01:30:23.280092 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 10 01:30:23.280110 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 10 01:30:23.280217 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 10 01:30:23.280239 kernel: ata3.00: LPM support broken, forcing max_power Mar 10 01:30:23.280256 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 10 01:30:23.280271 kernel: ata3.00: applying bridge limits Mar 10 01:30:23.280285 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 10 01:30:23.280300 kernel: ata3.00: LPM support broken, forcing max_power Mar 10 01:30:23.280320 kernel: ata3.00: configured for UDMA/100 Mar 10 01:30:23.280335 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 10 01:30:23.238946 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 10 01:30:23.341981 disk-uuid[618]: Primary Header is updated. Mar 10 01:30:23.341981 disk-uuid[618]: Secondary Entries is updated. Mar 10 01:30:23.341981 disk-uuid[618]: Secondary Header is updated. Mar 10 01:30:23.358525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:30:23.418821 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 10 01:30:23.419682 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:30:23.419703 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 10 01:30:23.459931 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 10 01:30:23.941592 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 10 01:30:23.949819 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 10 01:30:23.957442 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 10 01:30:23.966249 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 10 01:30:24.001805 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 10 01:30:24.102359 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 10 01:30:24.398335 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 10 01:30:24.408087 disk-uuid[619]: The operation has completed successfully. Mar 10 01:30:24.507262 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 10 01:30:24.507498 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 10 01:30:24.575644 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 10 01:30:24.633643 sh[648]: Success Mar 10 01:30:24.705430 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 10 01:30:24.711011 kernel: device-mapper: uevent: version 1.0.3 Mar 10 01:30:24.720608 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Mar 10 01:30:24.759224 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Mar 10 01:30:24.962491 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 10 01:30:24.973965 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 10 01:30:25.017472 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 10 01:30:25.048328 kernel: BTRFS: device fsid 91a17919-8e0b-4e39-b5e3-1547b6175986 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (660) Mar 10 01:30:25.061238 kernel: BTRFS info (device dm-0): first mount of filesystem 91a17919-8e0b-4e39-b5e3-1547b6175986 Mar 10 01:30:25.061293 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 10 01:30:25.105463 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Mar 10 01:30:25.105543 kernel: BTRFS info (device dm-0 state E): enabling free space tree Mar 10 01:30:25.109034 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 10 01:30:25.114850 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Mar 10 01:30:25.129036 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 10 01:30:25.132829 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 10 01:30:25.171425 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 10 01:30:25.257111 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (685) Mar 10 01:30:25.267746 kernel: BTRFS info (device vda6): first mount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6 Mar 10 01:30:25.267818 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 01:30:25.304813 kernel: BTRFS info (device vda6): turning on async discard Mar 10 01:30:25.304946 kernel: BTRFS info (device vda6): enabling free space tree Mar 10 01:30:25.323366 kernel: BTRFS info (device vda6): last unmount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6 Mar 10 01:30:25.331290 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 10 01:30:25.340486 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Mar 10 01:30:25.945576 kernel: hrtimer: interrupt took 2330229 ns Mar 10 01:30:26.269344 ignition[738]: Ignition 2.22.0 Mar 10 01:30:26.269407 ignition[738]: Stage: fetch-offline Mar 10 01:30:26.269469 ignition[738]: no configs at "/usr/lib/ignition/base.d" Mar 10 01:30:26.269485 ignition[738]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:30:26.269726 ignition[738]: parsed url from cmdline: "" Mar 10 01:30:26.269733 ignition[738]: no config URL provided Mar 10 01:30:26.269745 ignition[738]: reading system config file "/usr/lib/ignition/user.ign" Mar 10 01:30:26.269759 ignition[738]: no config at "/usr/lib/ignition/user.ign" Mar 10 01:30:26.269799 ignition[738]: op(1): [started] loading QEMU firmware config module Mar 10 01:30:26.269806 ignition[738]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 10 01:30:26.335557 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 10 01:30:26.368956 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 10 01:30:26.427950 ignition[738]: op(1): [finished] loading QEMU firmware config module Mar 10 01:30:26.707443 systemd-networkd[839]: lo: Link UP Mar 10 01:30:26.707540 systemd-networkd[839]: lo: Gained carrier Mar 10 01:30:26.711810 systemd-networkd[839]: Enumeration completed Mar 10 01:30:26.713377 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 01:30:26.714300 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:30:26.714307 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Mar 10 01:30:26.720124 systemd-networkd[839]: eth0: Link UP Mar 10 01:30:26.720983 systemd-networkd[839]: eth0: Gained carrier Mar 10 01:30:26.720998 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:30:26.734739 systemd[1]: Reached target network.target - Network. Mar 10 01:30:26.818341 systemd-networkd[839]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 10 01:30:26.974788 ignition[738]: parsing config with SHA512: 51448bd30d64ea0f6f542654d52bfc4e8911530e3b531ee3356161294bffa0909299544cedbf67d2bed6f9c54f6b144b7394ccf5285724934a3986e9803aeadb Mar 10 01:30:27.016946 unknown[738]: fetched base config from "system" Mar 10 01:30:27.017026 unknown[738]: fetched user config from "qemu" Mar 10 01:30:27.019063 ignition[738]: fetch-offline: fetch-offline passed Mar 10 01:30:27.019492 ignition[738]: Ignition finished successfully Mar 10 01:30:27.026797 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 10 01:30:27.035565 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 10 01:30:27.041353 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 10 01:30:27.958351 ignition[844]: Ignition 2.22.0 Mar 10 01:30:27.958419 ignition[844]: Stage: kargs Mar 10 01:30:27.959370 ignition[844]: no configs at "/usr/lib/ignition/base.d" Mar 10 01:30:27.959387 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:30:27.962457 ignition[844]: kargs: kargs passed Mar 10 01:30:27.962526 ignition[844]: Ignition finished successfully Mar 10 01:30:27.992824 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 10 01:30:28.002426 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Mar 10 01:30:28.334797 systemd-networkd[839]: eth0: Gained IPv6LL Mar 10 01:30:28.700541 ignition[852]: Ignition 2.22.0 Mar 10 01:30:28.700598 ignition[852]: Stage: disks Mar 10 01:30:28.703213 ignition[852]: no configs at "/usr/lib/ignition/base.d" Mar 10 01:30:28.703226 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:30:28.704438 ignition[852]: disks: disks passed Mar 10 01:30:28.704498 ignition[852]: Ignition finished successfully Mar 10 01:30:28.726763 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 10 01:30:28.734402 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 10 01:30:28.737296 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 10 01:30:28.750082 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 10 01:30:28.764388 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 01:30:28.768332 systemd[1]: Reached target basic.target - Basic System. Mar 10 01:30:28.798415 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 10 01:30:28.850203 systemd-fsck[862]: ROOT: clean, 15/553520 files, 52789/553472 blocks Mar 10 01:30:28.860356 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 10 01:30:28.872203 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 10 01:30:29.238676 kernel: EXT4-fs (vda9): mounted filesystem 494bf987-03e9-4980-9fc3-4af435e63ebe r/w with ordered data mode. Quota mode: none. Mar 10 01:30:29.239483 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 10 01:30:29.248567 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 10 01:30:29.267375 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 10 01:30:29.274972 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Mar 10 01:30:29.302644 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 10 01:30:29.329345 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (871) Mar 10 01:30:29.329409 kernel: BTRFS info (device vda6): first mount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6 Mar 10 01:30:29.329428 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 01:30:29.302838 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 10 01:30:29.365993 kernel: BTRFS info (device vda6): turning on async discard Mar 10 01:30:29.366033 kernel: BTRFS info (device vda6): enabling free space tree Mar 10 01:30:29.302885 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 10 01:30:29.341867 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 10 01:30:29.368291 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 10 01:30:29.376359 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 10 01:30:29.545735 initrd-setup-root[895]: cut: /sysroot/etc/passwd: No such file or directory Mar 10 01:30:29.563308 initrd-setup-root[902]: cut: /sysroot/etc/group: No such file or directory Mar 10 01:30:29.580244 initrd-setup-root[909]: cut: /sysroot/etc/shadow: No such file or directory Mar 10 01:30:29.614405 initrd-setup-root[916]: cut: /sysroot/etc/gshadow: No such file or directory Mar 10 01:30:29.901962 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 10 01:30:29.922296 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 10 01:30:29.929277 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 10 01:30:29.961702 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Mar 10 01:30:29.972577 kernel: BTRFS info (device vda6): last unmount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6 Mar 10 01:30:30.020384 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 10 01:30:30.109016 ignition[985]: INFO : Ignition 2.22.0 Mar 10 01:30:30.113506 ignition[985]: INFO : Stage: mount Mar 10 01:30:30.113506 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 01:30:30.113506 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:30:30.129045 ignition[985]: INFO : mount: mount passed Mar 10 01:30:30.129045 ignition[985]: INFO : Ignition finished successfully Mar 10 01:30:30.142620 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 10 01:30:30.151316 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 10 01:30:30.248878 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 10 01:30:30.331436 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (997) Mar 10 01:30:30.342067 kernel: BTRFS info (device vda6): first mount of filesystem ee81d5fa-b10d-48ad-a53f-95a2476266f6 Mar 10 01:30:30.342483 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 10 01:30:30.369431 kernel: BTRFS info (device vda6): turning on async discard Mar 10 01:30:30.369514 kernel: BTRFS info (device vda6): enabling free space tree Mar 10 01:30:30.374313 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 10 01:30:30.555053 ignition[1014]: INFO : Ignition 2.22.0 Mar 10 01:30:30.555053 ignition[1014]: INFO : Stage: files Mar 10 01:30:30.555053 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 10 01:30:30.555053 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 10 01:30:30.576595 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Mar 10 01:30:30.576595 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 10 01:30:30.576595 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 10 01:30:30.616216 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 10 01:30:30.616216 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 10 01:30:30.616216 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 10 01:30:30.611080 unknown[1014]: wrote ssh authorized keys file for user: core Mar 10 01:30:30.645285 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 10 01:30:30.645285 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Mar 10 01:30:30.760115 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 10 01:30:31.152401 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 10 01:30:31.166578 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 10 01:30:31.313891 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 10 01:30:31.313891 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 10 01:30:31.313891 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Mar 10 01:30:31.658280 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 10 01:30:36.823584 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Mar 10 01:30:36.840080 ignition[1014]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 10 01:30:36.840080 ignition[1014]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 10 01:30:36.862268 ignition[1014]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 10 01:30:36.862268 ignition[1014]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 10 01:30:36.862268 ignition[1014]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 10 01:30:36.862268 ignition[1014]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 10 01:30:36.862268 ignition[1014]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 10 01:30:36.862268 ignition[1014]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 10 01:30:36.862268 ignition[1014]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 10 01:30:37.408455 ignition[1014]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 10 01:30:37.434512 ignition[1014]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 10 01:30:37.441406 ignition[1014]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Mar 10 01:30:37.441406 ignition[1014]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 10 01:30:37.441406 ignition[1014]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 10 01:30:37.441406 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 10 01:30:37.441406 ignition[1014]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 10 01:30:37.441406 ignition[1014]: INFO : files: files passed Mar 10 01:30:37.441406 ignition[1014]: INFO : Ignition finished successfully Mar 10 01:30:37.510398 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 10 01:30:37.522000 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 10 01:30:37.531798 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 10 01:30:37.560923 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 10 01:30:37.561272 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 10 01:30:37.575224 initrd-setup-root-after-ignition[1042]: grep: /sysroot/oem/oem-release: No such file or directory Mar 10 01:30:37.605573 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 10 01:30:37.612436 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 10 01:30:37.612436 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 10 01:30:37.609306 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 10 01:30:37.614569 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Mar 10 01:30:37.628096 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 10 01:30:37.765807 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 10 01:30:37.766110 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 10 01:30:37.785671 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 10 01:30:37.793259 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 10 01:30:37.806295 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 10 01:30:37.808014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 10 01:30:37.872285 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:30:37.880388 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 10 01:30:37.936900 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 10 01:30:37.948678 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:30:37.951917 systemd[1]: Stopped target timers.target - Timer Units.
Mar 10 01:30:37.963063 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 10 01:30:37.963381 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 10 01:30:37.988593 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 10 01:30:37.993439 systemd[1]: Stopped target basic.target - Basic System.
Mar 10 01:30:38.001233 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 10 01:30:38.006302 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 10 01:30:38.023380 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 10 01:30:38.040543 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 10 01:30:38.046273 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 10 01:30:38.068274 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 10 01:30:38.074536 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 10 01:30:38.087308 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 10 01:30:38.098396 systemd[1]: Stopped target swap.target - Swaps.
Mar 10 01:30:38.111028 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 10 01:30:38.112824 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 10 01:30:38.128062 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:30:38.140418 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:30:38.148486 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 10 01:30:38.149616 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:30:38.158521 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 10 01:30:38.158662 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 10 01:30:38.178732 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 10 01:30:38.178930 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 10 01:30:38.184369 systemd[1]: Stopped target paths.target - Path Units.
Mar 10 01:30:38.196513 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 10 01:30:38.200428 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:30:38.209096 systemd[1]: Stopped target slices.target - Slice Units.
Mar 10 01:30:38.226712 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 10 01:30:38.240329 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 10 01:30:38.240487 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 10 01:30:38.246256 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 10 01:30:38.246393 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 10 01:30:38.262644 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 10 01:30:38.262833 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 10 01:30:38.271521 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 10 01:30:38.272025 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 10 01:30:38.278415 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 10 01:30:38.290315 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 10 01:30:38.290745 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:30:38.330645 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 10 01:30:38.345317 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 10 01:30:38.345623 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:30:38.367513 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 10 01:30:38.367821 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 10 01:30:38.411203 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 10 01:30:38.411359 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 10 01:30:38.428105 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 10 01:30:38.438465 ignition[1070]: INFO : Ignition 2.22.0
Mar 10 01:30:38.438465 ignition[1070]: INFO : Stage: umount
Mar 10 01:30:38.438465 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 10 01:30:38.438465 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 10 01:30:38.438465 ignition[1070]: INFO : umount: umount passed
Mar 10 01:30:38.438465 ignition[1070]: INFO : Ignition finished successfully
Mar 10 01:30:38.440291 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 10 01:30:38.440465 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 10 01:30:38.443763 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 10 01:30:38.443912 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 10 01:30:38.468107 systemd[1]: Stopped target network.target - Network.
Mar 10 01:30:38.474746 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 10 01:30:38.474854 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 10 01:30:38.485742 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 10 01:30:38.485857 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 10 01:30:38.493674 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 10 01:30:38.493773 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 10 01:30:38.502627 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 10 01:30:38.502704 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 10 01:30:38.513383 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 10 01:30:38.513463 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 10 01:30:38.516525 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 10 01:30:38.524416 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 10 01:30:38.542499 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 10 01:30:38.542785 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 10 01:30:38.556843 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 10 01:30:38.557453 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 10 01:30:38.557525 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:30:38.569866 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 10 01:30:38.595074 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 10 01:30:38.595405 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 10 01:30:38.611819 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 10 01:30:38.612577 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Mar 10 01:30:38.624982 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 10 01:30:38.625059 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:30:38.632015 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 10 01:30:38.642933 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 10 01:30:38.643073 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 10 01:30:38.657284 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 10 01:30:38.657396 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:30:38.670440 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 10 01:30:38.670534 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:30:38.684633 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:30:38.701247 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 10 01:30:38.735579 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 10 01:30:38.735908 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 10 01:30:38.744357 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 10 01:30:38.744446 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:30:38.752232 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 10 01:30:38.752296 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:30:38.762454 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 10 01:30:38.762541 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 10 01:30:38.765999 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 10 01:30:38.766076 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 10 01:30:38.771911 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 10 01:30:38.772057 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 10 01:30:38.791352 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 10 01:30:38.797234 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Mar 10 01:30:38.797409 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 01:30:38.821403 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 10 01:30:38.821518 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:30:38.831526 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 10 01:30:38.831618 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 10 01:30:38.849834 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 10 01:30:38.850047 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 10 01:30:38.861561 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 10 01:30:38.861780 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 10 01:30:38.870454 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 10 01:30:38.875080 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 10 01:30:38.926059 systemd[1]: Switching root.
Mar 10 01:30:39.002386 systemd-journald[202]: Journal stopped
Mar 10 01:30:43.065728 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Mar 10 01:30:43.065906 kernel: SELinux: policy capability network_peer_controls=1
Mar 10 01:30:43.065934 kernel: SELinux: policy capability open_perms=1
Mar 10 01:30:43.065962 kernel: SELinux: policy capability extended_socket_class=1
Mar 10 01:30:43.066080 kernel: SELinux: policy capability always_check_network=0
Mar 10 01:30:43.066098 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 10 01:30:43.066115 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 10 01:30:43.066212 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 10 01:30:43.066245 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 10 01:30:43.066267 kernel: SELinux: policy capability userspace_initial_context=0
Mar 10 01:30:43.066290 kernel: audit: type=1403 audit(1773106239.422:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 10 01:30:43.066313 systemd[1]: Successfully loaded SELinux policy in 191.874ms.
Mar 10 01:30:43.066342 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 19.590ms.
Mar 10 01:30:43.066361 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 10 01:30:43.066378 systemd[1]: Detected virtualization kvm.
Mar 10 01:30:43.066394 systemd[1]: Detected architecture x86-64.
Mar 10 01:30:43.066410 systemd[1]: Detected first boot.
Mar 10 01:30:43.066427 systemd[1]: Initializing machine ID from VM UUID.
Mar 10 01:30:43.066443 zram_generator::config[1113]: No configuration found.
Mar 10 01:30:43.066461 kernel: Guest personality initialized and is inactive
Mar 10 01:30:43.066524 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Mar 10 01:30:43.066542 kernel: Initialized host personality
Mar 10 01:30:43.066558 kernel: NET: Registered PF_VSOCK protocol family
Mar 10 01:30:43.066574 systemd[1]: Populated /etc with preset unit settings.
Mar 10 01:30:43.066593 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 10 01:30:43.066658 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 10 01:30:43.066677 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 10 01:30:43.066695 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 10 01:30:43.066714 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 10 01:30:43.066786 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 10 01:30:43.066804 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 10 01:30:43.066822 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 10 01:30:43.066840 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 10 01:30:43.066858 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 10 01:30:43.066876 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 10 01:30:43.066893 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 10 01:30:43.066910 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 10 01:30:43.066931 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 10 01:30:43.066949 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 10 01:30:43.067292 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 10 01:30:43.067321 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 10 01:30:43.067341 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 10 01:30:43.067358 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 10 01:30:43.067375 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 10 01:30:43.067392 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 10 01:30:43.067457 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 10 01:30:43.067765 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 10 01:30:43.067785 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 10 01:30:43.067802 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 10 01:30:43.067819 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 10 01:30:43.067836 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 10 01:30:43.067852 systemd[1]: Reached target slices.target - Slice Units.
Mar 10 01:30:43.067869 systemd[1]: Reached target swap.target - Swaps.
Mar 10 01:30:43.067886 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 10 01:30:43.067944 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 10 01:30:43.067964 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 10 01:30:43.068036 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 10 01:30:43.068054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 10 01:30:43.068071 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 10 01:30:43.068088 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 10 01:30:43.068104 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 10 01:30:43.068121 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 10 01:30:43.068223 systemd[1]: Mounting media.mount - External Media Directory...
Mar 10 01:30:43.068294 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:30:43.068312 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 10 01:30:43.068328 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 10 01:30:43.068343 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 10 01:30:43.068415 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 10 01:30:43.068432 systemd[1]: Reached target machines.target - Containers.
Mar 10 01:30:43.068446 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 10 01:30:43.068461 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 10 01:30:43.068482 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 10 01:30:43.068501 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 10 01:30:43.068517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 10 01:30:43.068532 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 10 01:30:43.068548 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 10 01:30:43.068563 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 10 01:30:43.068580 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 10 01:30:43.068597 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 10 01:30:43.068612 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 10 01:30:43.068685 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 10 01:30:43.068703 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 10 01:30:43.068718 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 10 01:30:43.068734 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 10 01:30:43.068750 kernel: ACPI: bus type drm_connector registered
Mar 10 01:30:43.068768 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 10 01:30:43.068784 kernel: fuse: init (API version 7.41)
Mar 10 01:30:43.068798 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 10 01:30:43.068868 kernel: loop: module loaded
Mar 10 01:30:43.068933 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 10 01:30:43.068953 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 10 01:30:43.069055 systemd-journald[1198]: Collecting audit messages is disabled.
Mar 10 01:30:43.069096 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 10 01:30:43.069116 systemd-journald[1198]: Journal started
Mar 10 01:30:43.069219 systemd-journald[1198]: Runtime Journal (/run/log/journal/b5ed4fc6b85540319024dd06b2e89cb5) is 6M, max 48.3M, 42.2M free.
Mar 10 01:30:41.839373 systemd[1]: Queued start job for default target multi-user.target.
Mar 10 01:30:41.869699 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 10 01:30:41.871066 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 10 01:30:41.871741 systemd[1]: systemd-journald.service: Consumed 2.065s CPU time.
Mar 10 01:30:43.096871 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 10 01:30:43.108490 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 10 01:30:43.108593 systemd[1]: Stopped verity-setup.service.
Mar 10 01:30:43.126588 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 10 01:30:43.137204 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 10 01:30:43.143055 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 10 01:30:43.149048 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 10 01:30:43.155940 systemd[1]: Mounted media.mount - External Media Directory.
Mar 10 01:30:43.161311 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 10 01:30:43.166763 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 10 01:30:43.172218 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 10 01:30:43.177838 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 10 01:30:43.190702 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 10 01:30:43.197348 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 10 01:30:43.197872 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 10 01:30:43.205604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 10 01:30:43.206073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 10 01:30:43.214567 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 10 01:30:43.215361 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 10 01:30:43.221738 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 10 01:30:43.222238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 10 01:30:43.230221 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 10 01:30:43.230752 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 10 01:30:43.237511 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 10 01:30:43.238056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 10 01:30:43.244795 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 10 01:30:43.250585 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 10 01:30:43.257271 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 10 01:30:43.265061 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 10 01:30:43.291537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 10 01:30:43.307741 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 10 01:30:43.316098 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 10 01:30:43.323351 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 10 01:30:43.328561 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 10 01:30:43.328637 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 10 01:30:43.338217 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 10 01:30:43.350342 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 10 01:30:43.356710 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 10 01:30:43.358846 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 10 01:30:43.376464 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 10 01:30:43.386346 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 10 01:30:43.396068 systemd-journald[1198]: Time spent on flushing to /var/log/journal/b5ed4fc6b85540319024dd06b2e89cb5 is 20.751ms for 969 entries.
Mar 10 01:30:43.396068 systemd-journald[1198]: System Journal (/var/log/journal/b5ed4fc6b85540319024dd06b2e89cb5) is 8M, max 195.6M, 187.6M free.
Mar 10 01:30:43.447030 systemd-journald[1198]: Received client request to flush runtime journal.
Mar 10 01:30:43.389344 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 10 01:30:43.396210 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 10 01:30:43.400620 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 10 01:30:43.415388 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 10 01:30:43.422443 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 10 01:30:43.439961 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 10 01:30:43.447733 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 10 01:30:43.456954 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 10 01:30:43.467541 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 10 01:30:43.487614 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 10 01:30:43.491620 kernel: loop0: detected capacity change from 0 to 110984
Mar 10 01:30:43.504444 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 10 01:30:43.514648 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 10 01:30:43.554959 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 10 01:30:43.570550 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 10 01:30:43.601951 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 10 01:30:43.629864 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 10 01:30:43.652350 kernel: loop1: detected capacity change from 0 to 128560
Mar 10 01:30:43.666056 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Mar 10 01:30:43.666085 systemd-tmpfiles[1247]: ACLs are not supported, ignoring.
Mar 10 01:30:43.698839 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 10 01:30:43.780288 kernel: loop2: detected capacity change from 0 to 217752
Mar 10 01:30:44.058315 kernel: loop3: detected capacity change from 0 to 110984
Mar 10 01:30:44.130325 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 10 01:30:44.214355 kernel: loop4: detected capacity change from 0 to 128560
Mar 10 01:30:44.326244 kernel: loop5: detected capacity change from 0 to 217752
Mar 10 01:30:44.470742 (sd-merge)[1256]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 10 01:30:44.473892 (sd-merge)[1256]: Merged extensions into '/usr'.
Mar 10 01:30:44.516254 systemd[1]: Reload requested from client PID 1233 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 10 01:30:44.516543 systemd[1]: Reloading...
Mar 10 01:30:44.735268 zram_generator::config[1278]: No configuration found.
Mar 10 01:30:45.359314 systemd[1]: Reloading finished in 841 ms.
Mar 10 01:30:45.410613 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 10 01:30:45.500763 systemd[1]: Starting ensure-sysext.service...
Mar 10 01:30:45.508352 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 10 01:30:45.511313 ldconfig[1228]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 10 01:30:45.533826 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 10 01:30:45.557910 systemd[1]: Reload requested from client PID 1318 ('systemctl') (unit ensure-sysext.service)...
Mar 10 01:30:45.558206 systemd[1]: Reloading...
Mar 10 01:30:45.570437 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Mar 10 01:30:45.570963 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Mar 10 01:30:45.572520 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 10 01:30:45.573049 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 10 01:30:45.576511 systemd-tmpfiles[1319]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 10 01:30:45.577091 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Mar 10 01:30:45.577887 systemd-tmpfiles[1319]: ACLs are not supported, ignoring.
Mar 10 01:30:45.597790 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:30:45.597802 systemd-tmpfiles[1319]: Skipping /boot
Mar 10 01:30:45.623550 systemd-tmpfiles[1319]: Detected autofs mount point /boot during canonicalization of boot.
Mar 10 01:30:45.623570 systemd-tmpfiles[1319]: Skipping /boot
Mar 10 01:30:45.758314 zram_generator::config[1344]: No configuration found.
Mar 10 01:30:46.148786 systemd[1]: Reloading finished in 590 ms.
Mar 10 01:30:46.165261 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 10 01:30:46.277650 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 10 01:30:46.315783 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 10 01:30:46.329746 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 10 01:30:46.346247 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 10 01:30:46.364396 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 10 01:30:46.386312 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 10 01:30:46.405629 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 10 01:30:46.428762 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:30:46.429354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 01:30:46.440119 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 01:30:46.475538 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 01:30:46.504052 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 01:30:46.519526 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 01:30:46.519847 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 10 01:30:46.540471 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 10 01:30:46.550462 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:30:46.616553 augenrules[1415]: No rules Mar 10 01:30:46.624859 systemd[1]: audit-rules.service: Deactivated successfully. Mar 10 01:30:46.625525 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 10 01:30:46.633576 systemd-udevd[1394]: Using default interface naming scheme 'v255'. Mar 10 01:30:46.647538 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 10 01:30:46.655609 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 01:30:46.656199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Mar 10 01:30:46.662353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 01:30:46.662659 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 01:30:46.672404 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 01:30:46.672785 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 01:30:46.705085 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:30:46.705841 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 01:30:46.708267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 01:30:46.715118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 01:30:46.725891 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 01:30:46.731973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 01:30:46.732258 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 10 01:30:46.743715 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 10 01:30:46.750559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:30:46.755585 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 10 01:30:46.767224 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 10 01:30:46.775528 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Mar 10 01:30:46.787496 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 01:30:46.788662 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 01:30:46.795457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 01:30:46.795842 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 01:30:46.802947 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 10 01:30:46.813977 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 01:30:46.814487 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 01:30:46.829868 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 10 01:30:46.879759 systemd[1]: Finished ensure-sysext.service. Mar 10 01:30:46.901761 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:30:46.904491 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 10 01:30:46.911203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 10 01:30:46.915772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 10 01:30:46.926350 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 10 01:30:46.951736 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 10 01:30:47.020852 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 10 01:30:47.028661 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 10 01:30:47.028722 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Mar 10 01:30:47.041413 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 10 01:30:47.053367 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 10 01:30:47.070414 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 10 01:30:47.070518 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 10 01:30:47.086456 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 10 01:30:47.086984 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 10 01:30:47.118489 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 10 01:30:47.118890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 10 01:30:47.126918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 10 01:30:47.127317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 10 01:30:47.136853 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 10 01:30:47.137335 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 10 01:30:47.174121 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 10 01:30:47.174417 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 10 01:30:47.175694 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 10 01:30:47.195343 augenrules[1468]: /sbin/augenrules: No change Mar 10 01:30:47.227790 augenrules[1505]: No rules Mar 10 01:30:47.228715 systemd[1]: audit-rules.service: Deactivated successfully. 
Mar 10 01:30:47.229367 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 10 01:30:47.264672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 10 01:30:47.276208 kernel: mousedev: PS/2 mouse device common for all mice Mar 10 01:30:47.277339 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 10 01:30:47.326238 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 10 01:30:47.338253 kernel: ACPI: button: Power Button [PWRF] Mar 10 01:30:47.339723 systemd-resolved[1390]: Positive Trust Anchors: Mar 10 01:30:47.339735 systemd-resolved[1390]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 10 01:30:47.339779 systemd-resolved[1390]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 10 01:30:47.341560 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 10 01:30:47.355990 systemd-resolved[1390]: Defaulting to hostname 'linux'. Mar 10 01:30:47.358777 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 10 01:30:47.364809 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Mar 10 01:30:47.399207 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 10 01:30:47.399643 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 10 01:30:47.398624 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 10 01:30:47.410303 systemd[1]: Reached target sysinit.target - System Initialization. Mar 10 01:30:47.416229 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 10 01:30:47.419639 systemd-networkd[1474]: lo: Link UP Mar 10 01:30:47.419644 systemd-networkd[1474]: lo: Gained carrier Mar 10 01:30:47.422413 systemd-networkd[1474]: Enumeration completed Mar 10 01:30:47.422756 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 10 01:30:47.423879 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:30:47.423887 systemd-networkd[1474]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 10 01:30:47.425787 systemd-networkd[1474]: eth0: Link UP Mar 10 01:30:47.426516 systemd-networkd[1474]: eth0: Gained carrier Mar 10 01:30:47.426532 systemd-networkd[1474]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 10 01:30:47.430067 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Mar 10 01:30:47.436424 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 10 01:30:47.442620 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 10 01:30:47.442662 systemd[1]: Reached target paths.target - Path Units. Mar 10 01:30:47.448592 systemd[1]: Reached target time-set.target - System Time Set. 
Mar 10 01:30:47.454683 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 10 01:30:47.460567 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 10 01:30:47.465906 systemd[1]: Reached target timers.target - Timer Units. Mar 10 01:30:47.466911 systemd-networkd[1474]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 10 01:30:47.470362 systemd-timesyncd[1475]: Network configuration changed, trying to establish connection. Mar 10 01:30:47.472875 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 10 01:30:48.361339 systemd-timesyncd[1475]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 10 01:30:48.361400 systemd-timesyncd[1475]: Initial clock synchronization to Tue 2026-03-10 01:30:48.361252 UTC. Mar 10 01:30:48.365831 systemd-resolved[1390]: Clock change detected. Flushing caches. Mar 10 01:30:48.368039 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 10 01:30:48.381289 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 10 01:30:48.388082 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 10 01:30:48.393804 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 10 01:30:48.419687 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 10 01:30:48.425224 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 10 01:30:48.431524 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 10 01:30:48.437203 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 10 01:30:48.448911 systemd[1]: Reached target network.target - Network. Mar 10 01:30:48.454843 systemd[1]: Reached target sockets.target - Socket Units. Mar 10 01:30:48.463101 systemd[1]: Reached target basic.target - Basic System. 
Mar 10 01:30:48.473829 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 10 01:30:48.473954 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 10 01:30:48.479142 systemd[1]: Starting containerd.service - containerd container runtime... Mar 10 01:30:48.493281 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 10 01:30:48.509261 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 10 01:30:48.523711 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 10 01:30:48.534482 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 10 01:30:48.544873 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 10 01:30:48.551276 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Mar 10 01:30:48.572781 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 10 01:30:48.583058 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 10 01:30:48.595044 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 10 01:30:48.606250 jq[1536]: false Mar 10 01:30:48.607137 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 10 01:30:48.608530 oslogin_cache_refresh[1538]: Refreshing passwd entry cache Mar 10 01:30:48.615264 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Refreshing passwd entry cache Mar 10 01:30:48.632168 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 10 01:30:48.662820 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Mar 10 01:30:48.667928 oslogin_cache_refresh[1538]: Failure getting users, quitting Mar 10 01:30:48.671007 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Failure getting users, quitting Mar 10 01:30:48.671007 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 10 01:30:48.671007 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Refreshing group entry cache Mar 10 01:30:48.667954 oslogin_cache_refresh[1538]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Mar 10 01:30:48.668022 oslogin_cache_refresh[1538]: Refreshing group entry cache Mar 10 01:30:48.680379 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 10 01:30:48.689067 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 10 01:30:48.691168 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 10 01:30:48.692768 systemd[1]: Starting update-engine.service - Update Engine... Mar 10 01:30:48.697303 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Failure getting groups, quitting Mar 10 01:30:48.697303 google_oslogin_nss_cache[1538]: oslogin_cache_refresh[1538]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Mar 10 01:30:48.697532 extend-filesystems[1537]: Found /dev/vda6 Mar 10 01:30:48.696800 oslogin_cache_refresh[1538]: Failure getting groups, quitting Mar 10 01:30:48.702409 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 10 01:30:48.696819 oslogin_cache_refresh[1538]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Mar 10 01:30:48.720115 extend-filesystems[1537]: Found /dev/vda9 Mar 10 01:30:48.724993 extend-filesystems[1537]: Checking size of /dev/vda9 Mar 10 01:30:48.724945 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 10 01:30:48.731258 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 10 01:30:48.731735 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 10 01:30:48.732328 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Mar 10 01:30:48.733521 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Mar 10 01:30:48.754247 extend-filesystems[1537]: Resized partition /dev/vda9 Mar 10 01:30:48.754304 systemd[1]: motdgen.service: Deactivated successfully. Mar 10 01:30:48.754832 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 10 01:30:48.767128 extend-filesystems[1567]: resize2fs 1.47.3 (8-Jul-2025) Mar 10 01:30:48.765426 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 10 01:30:48.765976 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 10 01:30:48.781324 jq[1559]: true Mar 10 01:30:48.809540 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 10 01:30:48.809702 tar[1564]: linux-amd64/LICENSE Mar 10 01:30:48.809702 tar[1564]: linux-amd64/helm Mar 10 01:30:48.810059 jq[1575]: true Mar 10 01:30:48.811993 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 10 01:30:48.820656 update_engine[1556]: I20260310 01:30:48.819530 1556 main.cc:92] Flatcar Update Engine starting Mar 10 01:30:48.830679 (ntainerd)[1568]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 10 01:30:48.840896 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Mar 10 01:30:48.882025 dbus-daemon[1534]: [system] SELinux support is enabled Mar 10 01:30:48.882268 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 10 01:30:48.890698 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 10 01:30:48.890775 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 10 01:30:48.904127 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 10 01:30:48.904202 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 10 01:30:48.942026 systemd[1]: Started update-engine.service - Update Engine. Mar 10 01:30:48.943178 update_engine[1556]: I20260310 01:30:48.942301 1556 update_check_scheduler.cc:74] Next update check in 2m39s Mar 10 01:30:48.945484 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 10 01:30:48.957808 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 10 01:30:48.983508 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 10 01:30:48.983508 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 10 01:30:48.983508 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 10 01:30:49.023688 extend-filesystems[1537]: Resized filesystem in /dev/vda9 Mar 10 01:30:49.030660 bash[1601]: Updated "/home/core/.ssh/authorized_keys" Mar 10 01:30:48.990559 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 10 01:30:48.991013 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Mar 10 01:30:49.062346 systemd-logind[1543]: Watching system buttons on /dev/input/event2 (Power Button) Mar 10 01:30:49.062423 systemd-logind[1543]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 10 01:30:49.067641 systemd-logind[1543]: New seat seat0. Mar 10 01:30:49.193637 kernel: kvm_amd: TSC scaling supported Mar 10 01:30:49.193737 kernel: kvm_amd: Nested Virtualization enabled Mar 10 01:30:49.193759 kernel: kvm_amd: Nested Paging enabled Mar 10 01:30:49.193800 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 10 01:30:49.193818 kernel: kvm_amd: PMU virtualization is disabled Mar 10 01:30:49.292026 locksmithd[1602]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 10 01:30:49.320675 containerd[1568]: time="2026-03-10T01:30:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 10 01:30:49.321423 containerd[1568]: time="2026-03-10T01:30:49.321257540Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Mar 10 01:30:49.347637 containerd[1568]: time="2026-03-10T01:30:49.346795146Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.607µs" Mar 10 01:30:49.347637 containerd[1568]: time="2026-03-10T01:30:49.346832295Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 10 01:30:49.347637 containerd[1568]: time="2026-03-10T01:30:49.346853596Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 10 01:30:49.347637 containerd[1568]: time="2026-03-10T01:30:49.347060462Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 10 01:30:49.347637 containerd[1568]: 
time="2026-03-10T01:30:49.347078335Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 10 01:30:49.347637 containerd[1568]: time="2026-03-10T01:30:49.347107269Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 10 01:30:49.347637 containerd[1568]: time="2026-03-10T01:30:49.347186337Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 10 01:30:49.347637 containerd[1568]: time="2026-03-10T01:30:49.347201235Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 10 01:30:49.348523 containerd[1568]: time="2026-03-10T01:30:49.347546960Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 10 01:30:49.348523 containerd[1568]: time="2026-03-10T01:30:49.348498616Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 10 01:30:49.348691 containerd[1568]: time="2026-03-10T01:30:49.348522250Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 10 01:30:49.348691 containerd[1568]: time="2026-03-10T01:30:49.348535795Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 10 01:30:49.348752 containerd[1568]: time="2026-03-10T01:30:49.348705492Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 10 01:30:49.349026 containerd[1568]: time="2026-03-10T01:30:49.348972470Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 10 01:30:49.349091 containerd[1568]: time="2026-03-10T01:30:49.349046599Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 10 01:30:49.349091 containerd[1568]: time="2026-03-10T01:30:49.349085050Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 10 01:30:49.349741 containerd[1568]: time="2026-03-10T01:30:49.349668770Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 10 01:30:49.351627 containerd[1568]: time="2026-03-10T01:30:49.350749057Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 10 01:30:49.351627 containerd[1568]: time="2026-03-10T01:30:49.350839195Z" level=info msg="metadata content store policy set" policy=shared Mar 10 01:30:49.357834 containerd[1568]: time="2026-03-10T01:30:49.357812153Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 10 01:30:49.358020 containerd[1568]: time="2026-03-10T01:30:49.357938819Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 10 01:30:49.358542 containerd[1568]: time="2026-03-10T01:30:49.358121390Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 10 01:30:49.358542 containerd[1568]: time="2026-03-10T01:30:49.358158579Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 10 01:30:49.358542 containerd[1568]: time="2026-03-10T01:30:49.358177184Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 10 01:30:49.358542 containerd[1568]: 
time="2026-03-10T01:30:49.358193495Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 10 01:30:49.358542 containerd[1568]: time="2026-03-10T01:30:49.358210586Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 10 01:30:49.358542 containerd[1568]: time="2026-03-10T01:30:49.358231105Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 10 01:30:49.358542 containerd[1568]: time="2026-03-10T01:30:49.358244029Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 10 01:30:49.358542 containerd[1568]: time="2026-03-10T01:30:49.358261311Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 10 01:30:49.358542 containerd[1568]: time="2026-03-10T01:30:49.358273233Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 10 01:30:49.358542 containerd[1568]: time="2026-03-10T01:30:49.358287671Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 10 01:30:49.358542 containerd[1568]: time="2026-03-10T01:30:49.358418144Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 10 01:30:49.358979 containerd[1568]: time="2026-03-10T01:30:49.358954846Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 10 01:30:49.359070 containerd[1568]: time="2026-03-10T01:30:49.359050484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 10 01:30:49.359144 containerd[1568]: time="2026-03-10T01:30:49.359124783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 10 01:30:49.359219 containerd[1568]: 
time="2026-03-10T01:30:49.359198991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 10 01:30:49.359303 containerd[1568]: time="2026-03-10T01:30:49.359288058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 10 01:30:49.359353 containerd[1568]: time="2026-03-10T01:30:49.359341928Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 10 01:30:49.359404 containerd[1568]: time="2026-03-10T01:30:49.359392001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 10 01:30:49.359682 containerd[1568]: time="2026-03-10T01:30:49.359656375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 10 01:30:49.359789 containerd[1568]: time="2026-03-10T01:30:49.359767222Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 10 01:30:49.359865 containerd[1568]: time="2026-03-10T01:30:49.359849375Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 10 01:30:49.359974 containerd[1568]: time="2026-03-10T01:30:49.359958760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 10 01:30:49.360047 containerd[1568]: time="2026-03-10T01:30:49.360029972Z" level=info msg="Start snapshots syncer" Mar 10 01:30:49.360403 containerd[1568]: time="2026-03-10T01:30:49.360225928Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 10 01:30:49.362342 containerd[1568]: time="2026-03-10T01:30:49.361965628Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.363744966Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.363814736Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364054955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364081264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364093767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364106231Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364118894Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364131929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364146716Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364174679Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364188715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364201830Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364237165Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 10 01:30:49.365664 containerd[1568]: time="2026-03-10T01:30:49.364251643Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 10 01:30:49.366059 containerd[1568]: time="2026-03-10T01:30:49.364262042Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 10 01:30:49.366059 containerd[1568]: time="2026-03-10T01:30:49.364273293Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 10 01:30:49.366059 containerd[1568]: time="2026-03-10T01:30:49.364283041Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 10 01:30:49.366059 containerd[1568]: time="2026-03-10T01:30:49.364294783Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 10 01:30:49.366059 containerd[1568]: time="2026-03-10T01:30:49.364320371Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 10 01:30:49.366059 containerd[1568]: time="2026-03-10T01:30:49.364339145Z" level=info msg="runtime interface created" Mar 10 01:30:49.366059 containerd[1568]: time="2026-03-10T01:30:49.364346069Z" level=info msg="created NRI interface" Mar 10 01:30:49.366059 containerd[1568]: time="2026-03-10T01:30:49.364356007Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 10 01:30:49.366059 containerd[1568]: time="2026-03-10T01:30:49.364372268Z" level=info msg="Connect containerd service" Mar 10 01:30:49.366059 containerd[1568]: time="2026-03-10T01:30:49.364394389Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 10 01:30:49.366059 
containerd[1568]: time="2026-03-10T01:30:49.365400166Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 10 01:30:49.378672 kernel: EDAC MC: Ver: 3.0.0 Mar 10 01:30:49.500686 containerd[1568]: time="2026-03-10T01:30:49.500520798Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 10 01:30:49.501038 containerd[1568]: time="2026-03-10T01:30:49.500856846Z" level=info msg="Start subscribing containerd event" Mar 10 01:30:49.501126 containerd[1568]: time="2026-03-10T01:30:49.501114195Z" level=info msg="Start recovering state" Mar 10 01:30:49.501259 containerd[1568]: time="2026-03-10T01:30:49.501239900Z" level=info msg="Start event monitor" Mar 10 01:30:49.501372 containerd[1568]: time="2026-03-10T01:30:49.501353753Z" level=info msg="Start cni network conf syncer for default" Mar 10 01:30:49.501497 containerd[1568]: time="2026-03-10T01:30:49.501480620Z" level=info msg="Start streaming server" Mar 10 01:30:49.501547 containerd[1568]: time="2026-03-10T01:30:49.501537245Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 10 01:30:49.501791 containerd[1568]: time="2026-03-10T01:30:49.501776792Z" level=info msg="runtime interface starting up..." Mar 10 01:30:49.501849 containerd[1568]: time="2026-03-10T01:30:49.501832366Z" level=info msg="starting plugins..." Mar 10 01:30:49.501926 containerd[1568]: time="2026-03-10T01:30:49.501909029Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 10 01:30:49.502406 containerd[1568]: time="2026-03-10T01:30:49.502387713Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 10 01:30:49.503868 containerd[1568]: time="2026-03-10T01:30:49.503847257Z" level=info msg="containerd successfully booted in 0.185223s" Mar 10 01:30:49.603539 sshd_keygen[1557]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 10 01:30:49.647727 tar[1564]: linux-amd64/README.md Mar 10 01:30:49.853862 systemd-networkd[1474]: eth0: Gained IPv6LL Mar 10 01:30:50.470780 systemd[1]: Started systemd-logind.service - User Login Management. Mar 10 01:30:50.477737 systemd[1]: Started containerd.service - containerd container runtime. Mar 10 01:30:50.484908 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 10 01:30:50.491330 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 10 01:30:50.498488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 10 01:30:50.505212 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 10 01:30:50.523426 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 10 01:30:50.581839 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 10 01:30:50.592393 systemd[1]: Reached target network-online.target - Network is Online. Mar 10 01:30:50.598925 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 10 01:30:50.605008 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 10 01:30:50.613081 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:30:50.625395 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 10 01:30:50.632293 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:42236.service - OpenSSH per-connection server daemon (10.0.0.1:42236). Mar 10 01:30:50.641116 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 10 01:30:50.652858 systemd[1]: issuegen.service: Deactivated successfully. 
Mar 10 01:30:50.653308 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 10 01:30:50.680936 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 10 01:30:50.697699 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 10 01:30:50.711037 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 10 01:30:50.711551 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 10 01:30:50.718131 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 10 01:30:50.721128 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 10 01:30:50.731111 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 10 01:30:50.740158 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 10 01:30:50.745332 systemd[1]: Reached target getty.target - Login Prompts. Mar 10 01:30:50.780872 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 42236 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:30:50.784810 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:30:50.797499 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 10 01:30:50.804833 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 10 01:30:50.825076 systemd-logind[1543]: New session 1 of user core. Mar 10 01:30:50.840828 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 10 01:30:50.851937 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 10 01:30:50.876099 (systemd)[1680]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 10 01:30:50.885151 systemd-logind[1543]: New session c1 of user core. Mar 10 01:30:51.096348 systemd[1680]: Queued start job for default target default.target. 
Mar 10 01:30:51.108356 systemd[1680]: Created slice app.slice - User Application Slice. Mar 10 01:30:51.108396 systemd[1680]: Reached target paths.target - Paths. Mar 10 01:30:51.108521 systemd[1680]: Reached target timers.target - Timers. Mar 10 01:30:51.112085 systemd[1680]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 10 01:30:51.131505 systemd[1680]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 10 01:30:51.131801 systemd[1680]: Reached target sockets.target - Sockets. Mar 10 01:30:51.131908 systemd[1680]: Reached target basic.target - Basic System. Mar 10 01:30:51.132004 systemd[1680]: Reached target default.target - Main User Target. Mar 10 01:30:51.132052 systemd[1680]: Startup finished in 235ms. Mar 10 01:30:51.132115 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 10 01:30:51.139913 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 10 01:30:51.178749 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:42238.service - OpenSSH per-connection server daemon (10.0.0.1:42238). Mar 10 01:30:51.261764 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 42238 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:30:51.264382 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:30:51.276068 systemd-logind[1543]: New session 2 of user core. Mar 10 01:30:51.284978 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 10 01:30:51.315836 sshd[1694]: Connection closed by 10.0.0.1 port 42238 Mar 10 01:30:51.317172 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Mar 10 01:30:51.333500 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:42238.service: Deactivated successfully. Mar 10 01:30:51.336491 systemd[1]: session-2.scope: Deactivated successfully. Mar 10 01:30:51.339020 systemd-logind[1543]: Session 2 logged out. Waiting for processes to exit. 
Mar 10 01:30:51.342413 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:42246.service - OpenSSH per-connection server daemon (10.0.0.1:42246). Mar 10 01:30:51.350027 systemd-logind[1543]: Removed session 2. Mar 10 01:30:51.410325 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 42246 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:30:51.413022 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:30:51.421643 systemd-logind[1543]: New session 3 of user core. Mar 10 01:30:51.432873 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 10 01:30:51.462356 sshd[1703]: Connection closed by 10.0.0.1 port 42246 Mar 10 01:30:51.463180 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Mar 10 01:30:51.469717 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:42246.service: Deactivated successfully. Mar 10 01:30:51.474769 systemd[1]: session-3.scope: Deactivated successfully. Mar 10 01:30:51.478766 systemd-logind[1543]: Session 3 logged out. Waiting for processes to exit. Mar 10 01:30:51.481667 systemd-logind[1543]: Removed session 3. Mar 10 01:30:51.898232 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:30:51.904176 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 10 01:30:51.909658 systemd[1]: Startup finished in 9.202s (kernel) + 20.502s (initrd) + 11.784s (userspace) = 41.490s. 
Mar 10 01:30:51.915200 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:30:52.547272 kubelet[1713]: E0310 01:30:52.547029 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:30:52.551524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:30:52.551917 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:30:52.552657 systemd[1]: kubelet.service: Consumed 1.141s CPU time, 254.9M memory peak. Mar 10 01:31:01.502650 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:47682.service - OpenSSH per-connection server daemon (10.0.0.1:47682). Mar 10 01:31:01.646751 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 47682 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:31:01.648760 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:31:01.664955 systemd-logind[1543]: New session 4 of user core. Mar 10 01:31:01.673976 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 10 01:31:01.729263 sshd[1729]: Connection closed by 10.0.0.1 port 47682 Mar 10 01:31:01.730354 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Mar 10 01:31:01.760282 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:47682.service: Deactivated successfully. Mar 10 01:31:01.765238 systemd[1]: session-4.scope: Deactivated successfully. Mar 10 01:31:01.772481 systemd-logind[1543]: Session 4 logged out. Waiting for processes to exit. 
Mar 10 01:31:01.775028 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:47692.service - OpenSSH per-connection server daemon (10.0.0.1:47692). Mar 10 01:31:01.782709 systemd-logind[1543]: Removed session 4. Mar 10 01:31:01.852863 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 47692 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:31:01.855877 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:31:01.873286 systemd-logind[1543]: New session 5 of user core. Mar 10 01:31:01.879404 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 10 01:31:01.895312 sshd[1738]: Connection closed by 10.0.0.1 port 47692 Mar 10 01:31:01.895844 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Mar 10 01:31:01.932022 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:47692.service: Deactivated successfully. Mar 10 01:31:01.934965 systemd[1]: session-5.scope: Deactivated successfully. Mar 10 01:31:01.944929 systemd-logind[1543]: Session 5 logged out. Waiting for processes to exit. Mar 10 01:31:01.947875 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:47708.service - OpenSSH per-connection server daemon (10.0.0.1:47708). Mar 10 01:31:01.952865 systemd-logind[1543]: Removed session 5. Mar 10 01:31:02.048752 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 47708 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:31:02.054098 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:31:02.072696 systemd-logind[1543]: New session 6 of user core. Mar 10 01:31:02.079022 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 10 01:31:02.109047 sshd[1747]: Connection closed by 10.0.0.1 port 47708 Mar 10 01:31:02.110403 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Mar 10 01:31:02.121445 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:47708.service: Deactivated successfully. 
Mar 10 01:31:02.124110 systemd[1]: session-6.scope: Deactivated successfully. Mar 10 01:31:02.125661 systemd-logind[1543]: Session 6 logged out. Waiting for processes to exit. Mar 10 01:31:02.128293 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:47720.service - OpenSSH per-connection server daemon (10.0.0.1:47720). Mar 10 01:31:02.131477 systemd-logind[1543]: Removed session 6. Mar 10 01:31:02.213456 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 47720 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:31:02.217051 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:31:02.230448 systemd-logind[1543]: New session 7 of user core. Mar 10 01:31:02.240079 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 10 01:31:02.273707 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 10 01:31:02.274160 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:31:02.298334 sudo[1757]: pam_unix(sudo:session): session closed for user root Mar 10 01:31:02.302464 sshd[1756]: Connection closed by 10.0.0.1 port 47720 Mar 10 01:31:02.303417 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Mar 10 01:31:02.321327 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:47720.service: Deactivated successfully. Mar 10 01:31:02.326309 systemd[1]: session-7.scope: Deactivated successfully. Mar 10 01:31:02.328145 systemd-logind[1543]: Session 7 logged out. Waiting for processes to exit. Mar 10 01:31:02.332759 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:47722.service - OpenSSH per-connection server daemon (10.0.0.1:47722). Mar 10 01:31:02.335187 systemd-logind[1543]: Removed session 7. 
Mar 10 01:31:02.417781 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 47722 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:31:02.420231 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:31:02.435102 systemd-logind[1543]: New session 8 of user core. Mar 10 01:31:02.445844 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 10 01:31:02.473844 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 10 01:31:02.474409 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:31:02.485051 sudo[1768]: pam_unix(sudo:session): session closed for user root Mar 10 01:31:02.494435 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 10 01:31:02.494996 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:31:02.511240 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 10 01:31:02.587814 augenrules[1790]: No rules Mar 10 01:31:02.590458 systemd[1]: audit-rules.service: Deactivated successfully. Mar 10 01:31:02.591072 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 10 01:31:02.594850 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 10 01:31:02.594976 sudo[1767]: pam_unix(sudo:session): session closed for user root Mar 10 01:31:02.598487 sshd[1766]: Connection closed by 10.0.0.1 port 47722 Mar 10 01:31:02.599234 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Mar 10 01:31:02.599763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:31:02.615804 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:47722.service: Deactivated successfully. Mar 10 01:31:02.619995 systemd[1]: session-8.scope: Deactivated successfully. 
Mar 10 01:31:02.621668 systemd-logind[1543]: Session 8 logged out. Waiting for processes to exit. Mar 10 01:31:02.630482 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:47728.service - OpenSSH per-connection server daemon (10.0.0.1:47728). Mar 10 01:31:02.633450 systemd-logind[1543]: Removed session 8. Mar 10 01:31:02.714387 sshd[1802]: Accepted publickey for core from 10.0.0.1 port 47728 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:31:02.718703 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:31:02.733084 systemd-logind[1543]: New session 9 of user core. Mar 10 01:31:02.742819 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 10 01:31:02.771883 sudo[1806]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 10 01:31:02.772320 sudo[1806]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 10 01:31:02.858334 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:31:02.880317 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:31:02.994448 kubelet[1821]: E0310 01:31:02.994111 1821 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:31:03.002465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:31:03.002915 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:31:03.003933 systemd[1]: kubelet.service: Consumed 320ms CPU time, 109.6M memory peak. Mar 10 01:31:03.345328 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Mar 10 01:31:03.367708 (dockerd)[1840]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 10 01:31:04.563478 dockerd[1840]: time="2026-03-10T01:31:04.562129034Z" level=info msg="Starting up" Mar 10 01:31:04.584251 dockerd[1840]: time="2026-03-10T01:31:04.583808320Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 10 01:31:04.684190 dockerd[1840]: time="2026-03-10T01:31:04.683424671Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Mar 10 01:31:05.061388 dockerd[1840]: time="2026-03-10T01:31:05.055485950Z" level=info msg="Loading containers: start." Mar 10 01:31:05.118154 kernel: Initializing XFRM netlink socket Mar 10 01:31:08.032178 systemd-networkd[1474]: docker0: Link UP Mar 10 01:31:08.063241 dockerd[1840]: time="2026-03-10T01:31:08.062784027Z" level=info msg="Loading containers: done." 
Mar 10 01:31:11.851075 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 3132215900 wd_nsec: 3132214424 Mar 10 01:31:11.955033 dockerd[1840]: time="2026-03-10T01:31:11.954163171Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 10 01:31:11.955033 dockerd[1840]: time="2026-03-10T01:31:11.954380827Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Mar 10 01:31:11.955033 dockerd[1840]: time="2026-03-10T01:31:11.954528152Z" level=info msg="Initializing buildkit" Mar 10 01:31:12.114458 dockerd[1840]: time="2026-03-10T01:31:12.114079815Z" level=info msg="Completed buildkit initialization" Mar 10 01:31:12.131694 dockerd[1840]: time="2026-03-10T01:31:12.130727524Z" level=info msg="Daemon has completed initialization" Mar 10 01:31:12.131694 dockerd[1840]: time="2026-03-10T01:31:12.131021022Z" level=info msg="API listen on /run/docker.sock" Mar 10 01:31:12.131279 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 10 01:31:13.232796 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 10 01:31:13.242746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:31:13.373357 containerd[1568]: time="2026-03-10T01:31:13.372942819Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 10 01:31:13.632077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 10 01:31:13.657311 (kubelet)[2069]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:31:13.815164 kubelet[2069]: E0310 01:31:13.814967 2069 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:31:13.825406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:31:13.825879 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:31:13.826887 systemd[1]: kubelet.service: Consumed 397ms CPU time, 109.3M memory peak. Mar 10 01:31:14.251878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount723492887.mount: Deactivated successfully. Mar 10 01:31:17.310234 containerd[1568]: time="2026-03-10T01:31:17.310050631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:17.313914 containerd[1568]: time="2026-03-10T01:31:17.313318932Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 10 01:31:17.317553 containerd[1568]: time="2026-03-10T01:31:17.317393717Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:17.323751 containerd[1568]: time="2026-03-10T01:31:17.323058771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:17.325888 containerd[1568]: time="2026-03-10T01:31:17.324935206Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 3.951815395s" Mar 10 01:31:17.325888 containerd[1568]: time="2026-03-10T01:31:17.325347565Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 10 01:31:17.329063 containerd[1568]: time="2026-03-10T01:31:17.328216458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 10 01:31:23.442227 containerd[1568]: time="2026-03-10T01:31:23.441157762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:23.444736 containerd[1568]: time="2026-03-10T01:31:23.443832075Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 10 01:31:23.447291 containerd[1568]: time="2026-03-10T01:31:23.446470582Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:23.451835 containerd[1568]: time="2026-03-10T01:31:23.451347518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:23.452091 containerd[1568]: time="2026-03-10T01:31:23.451961196Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id 
\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 6.123667946s" Mar 10 01:31:23.452091 containerd[1568]: time="2026-03-10T01:31:23.452044859Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 10 01:31:23.458019 containerd[1568]: time="2026-03-10T01:31:23.457869593Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 10 01:31:24.006323 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 10 01:31:24.042698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:31:24.981533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:31:25.049046 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:31:25.506648 kubelet[2150]: E0310 01:31:25.506411 2150 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:31:25.528049 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:31:25.528674 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:31:25.529350 systemd[1]: kubelet.service: Consumed 948ms CPU time, 112.6M memory peak. 
Mar 10 01:31:28.133353 containerd[1568]: time="2026-03-10T01:31:28.132676799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:28.135171 containerd[1568]: time="2026-03-10T01:31:28.135036980Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 10 01:31:28.139812 containerd[1568]: time="2026-03-10T01:31:28.139111623Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:28.149147 containerd[1568]: time="2026-03-10T01:31:28.148770811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:28.154320 containerd[1568]: time="2026-03-10T01:31:28.154121574Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 4.696133435s" Mar 10 01:31:28.154320 containerd[1568]: time="2026-03-10T01:31:28.154247645Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 10 01:31:28.156930 containerd[1568]: time="2026-03-10T01:31:28.156714955Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 10 01:31:33.261088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788192987.mount: Deactivated successfully. 
Mar 10 01:31:34.463058 update_engine[1556]: I20260310 01:31:34.454924 1556 update_attempter.cc:509] Updating boot flags... Mar 10 01:31:35.710709 containerd[1568]: time="2026-03-10T01:31:35.709143429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:35.712872 containerd[1568]: time="2026-03-10T01:31:35.712788915Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 10 01:31:35.717273 containerd[1568]: time="2026-03-10T01:31:35.717011434Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:35.725336 containerd[1568]: time="2026-03-10T01:31:35.723136467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:35.725336 containerd[1568]: time="2026-03-10T01:31:35.723799019Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 7.566888446s" Mar 10 01:31:35.725336 containerd[1568]: time="2026-03-10T01:31:35.723862066Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 10 01:31:35.727521 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
Mar 10 01:31:35.729883 containerd[1568]: time="2026-03-10T01:31:35.729798034Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 10 01:31:35.738707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:31:36.940295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:31:36.948548 (kubelet)[2193]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:31:37.260356 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459548675.mount: Deactivated successfully. Mar 10 01:31:37.282500 kubelet[2193]: E0310 01:31:37.282033 2193 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:31:37.286364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:31:37.286793 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:31:37.287714 systemd[1]: kubelet.service: Consumed 1.365s CPU time, 107.9M memory peak. 
Mar 10 01:31:42.506323 containerd[1568]: time="2026-03-10T01:31:42.505927977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:42.508464 containerd[1568]: time="2026-03-10T01:31:42.508421507Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 10 01:31:42.513336 containerd[1568]: time="2026-03-10T01:31:42.511028654Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:42.515776 containerd[1568]: time="2026-03-10T01:31:42.515661526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:42.518769 containerd[1568]: time="2026-03-10T01:31:42.517904862Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 6.78802713s" Mar 10 01:31:42.518769 containerd[1568]: time="2026-03-10T01:31:42.518001111Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 10 01:31:42.520369 containerd[1568]: time="2026-03-10T01:31:42.520312837Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 10 01:31:43.121917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount486780306.mount: Deactivated successfully. 
Mar 10 01:31:43.135351 containerd[1568]: time="2026-03-10T01:31:43.134977722Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:43.137515 containerd[1568]: time="2026-03-10T01:31:43.137302683Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 10 01:31:43.139839 containerd[1568]: time="2026-03-10T01:31:43.139710844Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:43.144370 containerd[1568]: time="2026-03-10T01:31:43.144223827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:43.145902 containerd[1568]: time="2026-03-10T01:31:43.145001214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 624.65768ms" Mar 10 01:31:43.145902 containerd[1568]: time="2026-03-10T01:31:43.145088535Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 10 01:31:43.146138 containerd[1568]: time="2026-03-10T01:31:43.146078023Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 10 01:31:43.918082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534110663.mount: Deactivated successfully. Mar 10 01:31:47.486074 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Mar 10 01:31:47.497070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:31:48.009885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:31:48.034676 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 10 01:31:48.424105 kubelet[2322]: E0310 01:31:48.421779 2322 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 10 01:31:48.426534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 10 01:31:48.426938 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 10 01:31:48.428408 systemd[1]: kubelet.service: Consumed 849ms CPU time, 107.8M memory peak. 
Mar 10 01:31:48.803734 containerd[1568]: time="2026-03-10T01:31:48.803532401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:48.805491 containerd[1568]: time="2026-03-10T01:31:48.805420549Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 10 01:31:48.808250 containerd[1568]: time="2026-03-10T01:31:48.808061380Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:48.813931 containerd[1568]: time="2026-03-10T01:31:48.813875424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:31:48.815500 containerd[1568]: time="2026-03-10T01:31:48.814826123Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 5.668720658s" Mar 10 01:31:48.815630 containerd[1568]: time="2026-03-10T01:31:48.815513266Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 10 01:31:51.833291 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:31:51.833756 systemd[1]: kubelet.service: Consumed 849ms CPU time, 107.8M memory peak. Mar 10 01:31:51.837758 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:31:51.905932 systemd[1]: Reload requested from client PID 2371 ('systemctl') (unit session-9.scope)... 
Mar 10 01:31:51.906083 systemd[1]: Reloading... Mar 10 01:31:52.145860 zram_generator::config[2420]: No configuration found. Mar 10 01:31:52.612841 systemd[1]: Reloading finished in 705 ms. Mar 10 01:31:52.732440 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 10 01:31:52.733033 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 10 01:31:52.733673 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:31:52.733733 systemd[1]: kubelet.service: Consumed 259ms CPU time, 98.1M memory peak. Mar 10 01:31:52.737768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 10 01:31:53.056396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 10 01:31:53.087463 (kubelet)[2463]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 10 01:31:53.202513 kubelet[2463]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 10 01:31:53.494927 kubelet[2463]: I0310 01:31:53.494638 2463 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 10 01:31:53.494927 kubelet[2463]: I0310 01:31:53.494707 2463 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 10 01:31:53.494927 kubelet[2463]: I0310 01:31:53.494732 2463 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 10 01:31:53.494927 kubelet[2463]: I0310 01:31:53.494738 2463 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 10 01:31:53.495965 kubelet[2463]: I0310 01:31:53.495677 2463 server.go:951] "Client rotation is on, will bootstrap in background" Mar 10 01:31:53.521202 kubelet[2463]: E0310 01:31:53.520815 2463 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 10 01:31:53.523840 kubelet[2463]: I0310 01:31:53.523669 2463 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 10 01:31:53.547646 kubelet[2463]: I0310 01:31:53.545509 2463 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 10 01:31:53.556054 kubelet[2463]: I0310 01:31:53.555973 2463 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 10 01:31:53.559970 kubelet[2463]: I0310 01:31:53.559859 2463 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 10 01:31:53.560199 kubelet[2463]: I0310 01:31:53.559934 2463 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 10 01:31:53.560199 kubelet[2463]: I0310 01:31:53.560188 2463 topology_manager.go:143] "Creating topology manager with none policy" Mar 10 01:31:53.561082 
kubelet[2463]: I0310 01:31:53.560203 2463 container_manager_linux.go:308] "Creating device plugin manager" Mar 10 01:31:53.561082 kubelet[2463]: I0310 01:31:53.560461 2463 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 10 01:31:53.563574 kubelet[2463]: I0310 01:31:53.563485 2463 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 10 01:31:53.564176 kubelet[2463]: I0310 01:31:53.564087 2463 kubelet.go:482] "Attempting to sync node with API server" Mar 10 01:31:53.564176 kubelet[2463]: I0310 01:31:53.564169 2463 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 10 01:31:53.564286 kubelet[2463]: I0310 01:31:53.564206 2463 kubelet.go:394] "Adding apiserver pod source" Mar 10 01:31:53.564286 kubelet[2463]: I0310 01:31:53.564221 2463 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 10 01:31:53.573635 kubelet[2463]: I0310 01:31:53.573369 2463 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 10 01:31:53.577320 kubelet[2463]: I0310 01:31:53.577236 2463 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 10 01:31:53.577320 kubelet[2463]: I0310 01:31:53.577303 2463 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 10 01:31:53.577436 kubelet[2463]: W0310 01:31:53.577402 2463 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 10 01:31:53.585081 kubelet[2463]: I0310 01:31:53.585030 2463 server.go:1257] "Started kubelet" Mar 10 01:31:53.585540 kubelet[2463]: I0310 01:31:53.585438 2463 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 10 01:31:53.588257 kubelet[2463]: I0310 01:31:53.585319 2463 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 10 01:31:53.588257 kubelet[2463]: I0310 01:31:53.586789 2463 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 10 01:31:53.588257 kubelet[2463]: I0310 01:31:53.587157 2463 server.go:317] "Adding debug handlers to kubelet server" Mar 10 01:31:53.588257 kubelet[2463]: I0310 01:31:53.587217 2463 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 10 01:31:53.589630 kubelet[2463]: I0310 01:31:53.589173 2463 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 10 01:31:53.589630 kubelet[2463]: I0310 01:31:53.589288 2463 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 10 01:31:53.598634 kubelet[2463]: E0310 01:31:53.597546 2463 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:31:53.598634 kubelet[2463]: I0310 01:31:53.597667 2463 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 10 01:31:53.598634 kubelet[2463]: I0310 01:31:53.597937 2463 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 10 01:31:53.598634 kubelet[2463]: I0310 01:31:53.597995 2463 reconciler.go:29] "Reconciler: start to sync state" Mar 10 01:31:53.599397 kubelet[2463]: E0310 01:31:53.599366 2463 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.12:6443: connect: connection refused" interval="200ms" Mar 10 01:31:53.607071 kubelet[2463]: E0310 01:31:53.597077 2463 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189b56cc423c82fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-10 01:31:53.585001212 +0000 UTC m=+0.490912122,LastTimestamp:2026-03-10 01:31:53.585001212 +0000 UTC m=+0.490912122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 10 01:31:53.611421 kubelet[2463]: E0310 01:31:53.611383 2463 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 10 01:31:53.611794 kubelet[2463]: I0310 01:31:53.611751 2463 factory.go:223] Registration of the containerd container factory successfully Mar 10 01:31:53.611794 kubelet[2463]: I0310 01:31:53.611766 2463 factory.go:223] Registration of the systemd container factory successfully Mar 10 01:31:53.611794 kubelet[2463]: I0310 01:31:53.611849 2463 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 10 01:31:53.633382 kubelet[2463]: I0310 01:31:53.633314 2463 cpu_manager.go:225] "Starting" policy="none" Mar 10 01:31:53.633382 kubelet[2463]: I0310 01:31:53.633352 2463 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 10 01:31:53.633382 kubelet[2463]: I0310 01:31:53.633369 2463 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 10 01:31:53.638738 kubelet[2463]: I0310 01:31:53.638418 2463 policy_none.go:50] "Start" Mar 10 01:31:53.638738 kubelet[2463]: I0310 01:31:53.638465 2463 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 10 01:31:53.638738 kubelet[2463]: I0310 01:31:53.638480 2463 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 10 01:31:53.641658 kubelet[2463]: I0310 01:31:53.641502 2463 policy_none.go:44] "Start" Mar 10 01:31:53.649245 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 10 01:31:53.650046 kubelet[2463]: I0310 01:31:53.649929 2463 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Mar 10 01:31:53.651929 kubelet[2463]: I0310 01:31:53.651867 2463 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 10 01:31:53.651929 kubelet[2463]: I0310 01:31:53.651920 2463 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 10 01:31:53.652043 kubelet[2463]: I0310 01:31:53.651947 2463 kubelet.go:2501] "Starting kubelet main sync loop" Mar 10 01:31:53.652043 kubelet[2463]: E0310 01:31:53.652005 2463 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 10 01:31:53.677856 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 10 01:31:53.700830 kubelet[2463]: E0310 01:31:53.700108 2463 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 10 01:31:53.737513 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 10 01:31:53.754229 kubelet[2463]: E0310 01:31:53.752925 2463 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 10 01:31:53.764494 kubelet[2463]: E0310 01:31:53.764436 2463 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 10 01:31:53.764936 kubelet[2463]: I0310 01:31:53.764915 2463 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 10 01:31:53.765057 kubelet[2463]: I0310 01:31:53.764929 2463 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 10 01:31:53.765389 kubelet[2463]: I0310 01:31:53.765339 2463 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 10 01:31:53.771153 kubelet[2463]: E0310 01:31:53.771038 2463 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 10 01:31:53.771437 kubelet[2463]: E0310 01:31:53.771369 2463 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 10 01:31:53.802216 kubelet[2463]: E0310 01:31:53.801970 2463 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Mar 10 01:31:53.876628 kubelet[2463]: I0310 01:31:53.875047 2463 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 10 01:31:53.876919 kubelet[2463]: E0310 01:31:53.876801 2463 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Mar 10 01:31:53.999778 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. 
Mar 10 01:31:54.002758 kubelet[2463]: I0310 01:31:54.001015 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:31:54.002758 kubelet[2463]: I0310 01:31:54.001058 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:31:54.002758 kubelet[2463]: I0310 01:31:54.001087 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:31:54.002758 kubelet[2463]: I0310 01:31:54.001108 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e8a87e19b885c9fb9d42b9c3defccca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7e8a87e19b885c9fb9d42b9c3defccca\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:31:54.002758 kubelet[2463]: I0310 01:31:54.001188 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e8a87e19b885c9fb9d42b9c3defccca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7e8a87e19b885c9fb9d42b9c3defccca\") " pod="kube-system/kube-apiserver-localhost" 
Mar 10 01:31:54.003016 kubelet[2463]: I0310 01:31:54.001208 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:31:54.003016 kubelet[2463]: I0310 01:31:54.001230 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 10 01:31:54.003016 kubelet[2463]: I0310 01:31:54.001249 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 10 01:31:54.003016 kubelet[2463]: I0310 01:31:54.001269 2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e8a87e19b885c9fb9d42b9c3defccca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7e8a87e19b885c9fb9d42b9c3defccca\") " pod="kube-system/kube-apiserver-localhost" Mar 10 01:31:54.046614 kubelet[2463]: E0310 01:31:54.046076 2463 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:31:54.106181 kubelet[2463]: I0310 01:31:54.105972 2463 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 10 01:31:54.107023 kubelet[2463]: E0310 
01:31:54.106737 2463 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Mar 10 01:31:54.148404 systemd[1]: Created slice kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice. Mar 10 01:31:54.210857 kubelet[2463]: E0310 01:31:54.210695 2463 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Mar 10 01:31:54.219093 kubelet[2463]: E0310 01:31:54.218043 2463 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 10 01:31:54.226958 systemd[1]: Created slice kubepods-burstable-pod7e8a87e19b885c9fb9d42b9c3defccca.slice - libcontainer container kubepods-burstable-pod7e8a87e19b885c9fb9d42b9c3defccca.slice. 
Mar 10 01:31:54.230354 kubelet[2463]: E0310 01:31:54.226961 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:54.240315 containerd[1568]: time="2026-03-10T01:31:54.237802152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}"
Mar 10 01:31:54.289482 kubelet[2463]: E0310 01:31:54.289237 2463 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:31:54.326518 kubelet[2463]: E0310 01:31:54.325104 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:54.330815 containerd[1568]: time="2026-03-10T01:31:54.330307362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7e8a87e19b885c9fb9d42b9c3defccca,Namespace:kube-system,Attempt:0,}"
Mar 10 01:31:54.355854 kubelet[2463]: E0310 01:31:54.355458 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:54.357354 containerd[1568]: time="2026-03-10T01:31:54.357258950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}"
Mar 10 01:31:54.510052 kubelet[2463]: I0310 01:31:54.509940 2463 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 10 01:31:54.510720 kubelet[2463]: E0310 01:31:54.510431 2463 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost"
Mar 10 01:31:54.940876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198697510.mount: Deactivated successfully.
Mar 10 01:31:54.953882 containerd[1568]: time="2026-03-10T01:31:54.953477912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:31:54.960161 containerd[1568]: time="2026-03-10T01:31:54.960008127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 10 01:31:54.964425 containerd[1568]: time="2026-03-10T01:31:54.964192422Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:31:54.972696 containerd[1568]: time="2026-03-10T01:31:54.972347946Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:31:54.979218 containerd[1568]: time="2026-03-10T01:31:54.979147965Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 10 01:31:54.979617 containerd[1568]: time="2026-03-10T01:31:54.979493094Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:31:54.981332 containerd[1568]: time="2026-03-10T01:31:54.980991161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 10 01:31:54.982690 containerd[1568]: time="2026-03-10T01:31:54.982240689Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 725.756156ms"
Mar 10 01:31:54.987316 containerd[1568]: time="2026-03-10T01:31:54.985321684Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 10 01:31:54.993269 containerd[1568]: time="2026-03-10T01:31:54.992170124Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 626.641358ms"
Mar 10 01:31:54.996676 containerd[1568]: time="2026-03-10T01:31:54.996181257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 656.479837ms"
Mar 10 01:31:55.014530 kubelet[2463]: E0310 01:31:55.014271 2463 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s"
Mar 10 01:31:55.066895 containerd[1568]: time="2026-03-10T01:31:55.065424427Z" level=info msg="connecting to shim e4282c60af8917a9c962ce2fdeacd8608c1ea1bbdab381ea4f4eddfff375507e" address="unix:///run/containerd/s/6b774a056463f36f43634bfe86ef8850ec4092a1f0641d6de7f1331eec822846" namespace=k8s.io protocol=ttrpc version=3
Mar 10 01:31:55.078479 containerd[1568]: time="2026-03-10T01:31:55.078404046Z" level=info msg="connecting to shim f4818a0239be2ebb1d3e0e9abf88933e2a8111446b72486f467b8beb747b2065" address="unix:///run/containerd/s/93889863b56220b0a7930396cfdb49fe50aeab7a74a148933083c2f2fc3e61a1" namespace=k8s.io protocol=ttrpc version=3
Mar 10 01:31:55.085289 containerd[1568]: time="2026-03-10T01:31:55.085047931Z" level=info msg="connecting to shim 9c46f6b76cc108c6c69fd5304fc44339cdf972f324d5ff5d6d0838c1c4c241df" address="unix:///run/containerd/s/b517a1e3fed7ac4b39612d54ca54d7b9e9602e298d0273846c33eda4f7151eec" namespace=k8s.io protocol=ttrpc version=3
Mar 10 01:31:55.128808 systemd[1]: Started cri-containerd-f4818a0239be2ebb1d3e0e9abf88933e2a8111446b72486f467b8beb747b2065.scope - libcontainer container f4818a0239be2ebb1d3e0e9abf88933e2a8111446b72486f467b8beb747b2065.
Mar 10 01:31:55.137227 systemd[1]: Started cri-containerd-9c46f6b76cc108c6c69fd5304fc44339cdf972f324d5ff5d6d0838c1c4c241df.scope - libcontainer container 9c46f6b76cc108c6c69fd5304fc44339cdf972f324d5ff5d6d0838c1c4c241df.
Mar 10 01:31:55.149714 systemd[1]: Started cri-containerd-e4282c60af8917a9c962ce2fdeacd8608c1ea1bbdab381ea4f4eddfff375507e.scope - libcontainer container e4282c60af8917a9c962ce2fdeacd8608c1ea1bbdab381ea4f4eddfff375507e.
Mar 10 01:31:55.274958 containerd[1568]: time="2026-03-10T01:31:55.274210457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7e8a87e19b885c9fb9d42b9c3defccca,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c46f6b76cc108c6c69fd5304fc44339cdf972f324d5ff5d6d0838c1c4c241df\""
Mar 10 01:31:55.276053 kubelet[2463]: E0310 01:31:55.275701 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:55.286775 containerd[1568]: time="2026-03-10T01:31:55.286724924Z" level=info msg="CreateContainer within sandbox \"9c46f6b76cc108c6c69fd5304fc44339cdf972f324d5ff5d6d0838c1c4c241df\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 10 01:31:55.306996 containerd[1568]: time="2026-03-10T01:31:55.306337924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4282c60af8917a9c962ce2fdeacd8608c1ea1bbdab381ea4f4eddfff375507e\""
Mar 10 01:31:55.318048 kubelet[2463]: I0310 01:31:55.318004 2463 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 10 01:31:55.319409 kubelet[2463]: E0310 01:31:55.319368 2463 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost"
Mar 10 01:31:55.319688 kubelet[2463]: E0310 01:31:55.319523 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:55.339652 containerd[1568]: time="2026-03-10T01:31:55.339185139Z" level=info msg="CreateContainer within sandbox \"e4282c60af8917a9c962ce2fdeacd8608c1ea1bbdab381ea4f4eddfff375507e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 10 01:31:55.707900 kubelet[2463]: E0310 01:31:55.702839 2463 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 10 01:31:55.832915 containerd[1568]: time="2026-03-10T01:31:55.831817713Z" level=info msg="Container 9c3e941357c8e58e2e27c8dbc5dbd182e4afc326d0d11b6692819697233074e6: CDI devices from CRI Config.CDIDevices: []"
Mar 10 01:31:55.834953 containerd[1568]: time="2026-03-10T01:31:55.834894492Z" level=info msg="Container f70c12247c2d0fddaded712dd5b6ea125cd49f794a4e4ceee2c67d907831043a: CDI devices from CRI Config.CDIDevices: []"
Mar 10 01:31:55.849925 containerd[1568]: time="2026-03-10T01:31:55.849310438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4818a0239be2ebb1d3e0e9abf88933e2a8111446b72486f467b8beb747b2065\""
Mar 10 01:31:55.856340 kubelet[2463]: E0310 01:31:55.852346 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:55.951632 containerd[1568]: time="2026-03-10T01:31:55.951331828Z" level=info msg="CreateContainer within sandbox \"9c46f6b76cc108c6c69fd5304fc44339cdf972f324d5ff5d6d0838c1c4c241df\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f70c12247c2d0fddaded712dd5b6ea125cd49f794a4e4ceee2c67d907831043a\""
Mar 10 01:31:55.958278 containerd[1568]: time="2026-03-10T01:31:55.957703926Z" level=info msg="StartContainer for \"f70c12247c2d0fddaded712dd5b6ea125cd49f794a4e4ceee2c67d907831043a\""
Mar 10 01:31:55.962203 containerd[1568]: time="2026-03-10T01:31:55.960841674Z" level=info msg="CreateContainer within sandbox \"f4818a0239be2ebb1d3e0e9abf88933e2a8111446b72486f467b8beb747b2065\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 10 01:31:55.962203 containerd[1568]: time="2026-03-10T01:31:55.961630568Z" level=info msg="connecting to shim f70c12247c2d0fddaded712dd5b6ea125cd49f794a4e4ceee2c67d907831043a" address="unix:///run/containerd/s/b517a1e3fed7ac4b39612d54ca54d7b9e9602e298d0273846c33eda4f7151eec" protocol=ttrpc version=3
Mar 10 01:31:55.963660 containerd[1568]: time="2026-03-10T01:31:55.963231207Z" level=info msg="CreateContainer within sandbox \"e4282c60af8917a9c962ce2fdeacd8608c1ea1bbdab381ea4f4eddfff375507e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9c3e941357c8e58e2e27c8dbc5dbd182e4afc326d0d11b6692819697233074e6\""
Mar 10 01:31:55.965615 containerd[1568]: time="2026-03-10T01:31:55.965155778Z" level=info msg="StartContainer for \"9c3e941357c8e58e2e27c8dbc5dbd182e4afc326d0d11b6692819697233074e6\""
Mar 10 01:31:55.967353 containerd[1568]: time="2026-03-10T01:31:55.966558168Z" level=info msg="connecting to shim 9c3e941357c8e58e2e27c8dbc5dbd182e4afc326d0d11b6692819697233074e6" address="unix:///run/containerd/s/6b774a056463f36f43634bfe86ef8850ec4092a1f0641d6de7f1331eec822846" protocol=ttrpc version=3
Mar 10 01:31:56.034062 containerd[1568]: time="2026-03-10T01:31:56.033879232Z" level=info msg="Container c67a99e1eace42aa64221d343adb2a741d7140ff9a2ca70dcc9402aa60f8fc5e: CDI devices from CRI Config.CDIDevices: []"
Mar 10 01:31:56.050260 containerd[1568]: time="2026-03-10T01:31:56.050206954Z" level=info msg="CreateContainer within sandbox \"f4818a0239be2ebb1d3e0e9abf88933e2a8111446b72486f467b8beb747b2065\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c67a99e1eace42aa64221d343adb2a741d7140ff9a2ca70dcc9402aa60f8fc5e\""
Mar 10 01:31:56.050831 containerd[1568]: time="2026-03-10T01:31:56.050805649Z" level=info msg="StartContainer for \"c67a99e1eace42aa64221d343adb2a741d7140ff9a2ca70dcc9402aa60f8fc5e\""
Mar 10 01:31:56.052474 containerd[1568]: time="2026-03-10T01:31:56.052382054Z" level=info msg="connecting to shim c67a99e1eace42aa64221d343adb2a741d7140ff9a2ca70dcc9402aa60f8fc5e" address="unix:///run/containerd/s/93889863b56220b0a7930396cfdb49fe50aeab7a74a148933083c2f2fc3e61a1" protocol=ttrpc version=3
Mar 10 01:31:56.054838 systemd[1]: Started cri-containerd-9c3e941357c8e58e2e27c8dbc5dbd182e4afc326d0d11b6692819697233074e6.scope - libcontainer container 9c3e941357c8e58e2e27c8dbc5dbd182e4afc326d0d11b6692819697233074e6.
Mar 10 01:31:56.059247 systemd[1]: Started cri-containerd-f70c12247c2d0fddaded712dd5b6ea125cd49f794a4e4ceee2c67d907831043a.scope - libcontainer container f70c12247c2d0fddaded712dd5b6ea125cd49f794a4e4ceee2c67d907831043a.
Mar 10 01:31:56.172998 systemd[1]: Started cri-containerd-c67a99e1eace42aa64221d343adb2a741d7140ff9a2ca70dcc9402aa60f8fc5e.scope - libcontainer container c67a99e1eace42aa64221d343adb2a741d7140ff9a2ca70dcc9402aa60f8fc5e.
Mar 10 01:31:56.267225 containerd[1568]: time="2026-03-10T01:31:56.266903904Z" level=info msg="StartContainer for \"9c3e941357c8e58e2e27c8dbc5dbd182e4afc326d0d11b6692819697233074e6\" returns successfully"
Mar 10 01:31:56.314501 containerd[1568]: time="2026-03-10T01:31:56.313687341Z" level=info msg="StartContainer for \"c67a99e1eace42aa64221d343adb2a741d7140ff9a2ca70dcc9402aa60f8fc5e\" returns successfully"
Mar 10 01:31:56.325227 containerd[1568]: time="2026-03-10T01:31:56.325087369Z" level=info msg="StartContainer for \"f70c12247c2d0fddaded712dd5b6ea125cd49f794a4e4ceee2c67d907831043a\" returns successfully"
Mar 10 01:31:56.854670 kubelet[2463]: E0310 01:31:56.853100 2463 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:31:56.854670 kubelet[2463]: E0310 01:31:56.853448 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:56.864507 kubelet[2463]: E0310 01:31:56.864426 2463 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:31:56.864863 kubelet[2463]: E0310 01:31:56.864740 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:56.871171 kubelet[2463]: E0310 01:31:56.870444 2463 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:31:56.872938 kubelet[2463]: E0310 01:31:56.872765 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:56.924316 kubelet[2463]: I0310 01:31:56.923949 2463 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 10 01:31:57.903017 kubelet[2463]: E0310 01:31:57.897738 2463 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:31:57.903017 kubelet[2463]: E0310 01:31:57.897867 2463 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:31:57.903017 kubelet[2463]: E0310 01:31:57.898345 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:57.903017 kubelet[2463]: E0310 01:31:57.901069 2463 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:31:57.903017 kubelet[2463]: E0310 01:31:57.901499 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:57.910313 kubelet[2463]: E0310 01:31:57.904883 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:58.894301 kubelet[2463]: E0310 01:31:58.894001 2463 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 10 01:31:58.894301 kubelet[2463]: E0310 01:31:58.894457 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:31:59.812723 kubelet[2463]: E0310 01:31:59.811939 2463 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 10 01:31:59.938004 kubelet[2463]: I0310 01:31:59.937501 2463 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 10 01:32:00.005542 kubelet[2463]: I0310 01:32:00.005058 2463 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:32:00.065642 kubelet[2463]: E0310 01:32:00.064473 2463 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:32:00.065642 kubelet[2463]: I0310 01:32:00.064733 2463 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 10 01:32:00.075422 kubelet[2463]: E0310 01:32:00.074957 2463 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 10 01:32:00.075422 kubelet[2463]: I0310 01:32:00.075074 2463 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 10 01:32:00.096086 kubelet[2463]: I0310 01:32:00.095326 2463 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 10 01:32:00.102753 kubelet[2463]: E0310 01:32:00.101010 2463 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Mar 10 01:32:00.102753 kubelet[2463]: E0310 01:32:00.101663 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:00.103325 kubelet[2463]: E0310 01:32:00.102917 2463 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Mar 10 01:32:00.585859 kubelet[2463]: I0310 01:32:00.585235 2463 apiserver.go:52] "Watching apiserver"
Mar 10 01:32:00.700450 kubelet[2463]: I0310 01:32:00.700036 2463 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 10 01:32:01.107766 kubelet[2463]: I0310 01:32:01.107269 2463 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:32:01.119415 kubelet[2463]: E0310 01:32:01.119288 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:01.914067 kubelet[2463]: E0310 01:32:01.913702 2463 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:03.140924 systemd[1]: Reload requested from client PID 2747 ('systemctl') (unit session-9.scope)...
Mar 10 01:32:03.140943 systemd[1]: Reloading...
Mar 10 01:32:03.379086 zram_generator::config[2796]: No configuration found.
Mar 10 01:32:03.745911 kubelet[2463]: I0310 01:32:03.745553 2463 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.745506148 podStartE2EDuration="2.745506148s" podCreationTimestamp="2026-03-10 01:32:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:32:03.745349014 +0000 UTC m=+10.651259934" watchObservedRunningTime="2026-03-10 01:32:03.745506148 +0000 UTC m=+10.651417058"
Mar 10 01:32:04.035969 systemd[1]: Reloading finished in 894 ms.
Mar 10 01:32:04.102172 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:32:04.124555 systemd[1]: kubelet.service: Deactivated successfully.
Mar 10 01:32:04.125013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:32:04.125096 systemd[1]: kubelet.service: Consumed 2.551s CPU time, 128.1M memory peak.
Mar 10 01:32:04.128473 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 10 01:32:04.573875 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 10 01:32:04.594291 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 10 01:32:04.752317 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 10 01:32:04.766433 kubelet[2837]: I0310 01:32:04.765989 2837 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 10 01:32:04.766433 kubelet[2837]: I0310 01:32:04.766230 2837 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 10 01:32:04.766433 kubelet[2837]: I0310 01:32:04.766258 2837 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 10 01:32:04.766433 kubelet[2837]: I0310 01:32:04.766267 2837 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 10 01:32:04.766973 kubelet[2837]: I0310 01:32:04.766727 2837 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 10 01:32:04.777902 kubelet[2837]: I0310 01:32:04.777846 2837 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 10 01:32:04.781632 kubelet[2837]: I0310 01:32:04.781073 2837 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 10 01:32:04.794641 kubelet[2837]: I0310 01:32:04.793216 2837 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 10 01:32:04.803306 kubelet[2837]: I0310 01:32:04.803219 2837 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 10 01:32:04.803988 kubelet[2837]: I0310 01:32:04.803771 2837 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 10 01:32:04.804070 kubelet[2837]: I0310 01:32:04.803841 2837 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 10 01:32:04.804070 kubelet[2837]: I0310 01:32:04.804039 2837 topology_manager.go:143] "Creating topology manager with none policy"
Mar 10 01:32:04.804070 kubelet[2837]: I0310 01:32:04.804050 2837 container_manager_linux.go:308] "Creating device plugin manager"
Mar 10 01:32:04.804336 kubelet[2837]: I0310 01:32:04.804083 2837 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 10 01:32:04.804710 kubelet[2837]: I0310 01:32:04.804657 2837 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 10 01:32:04.804924 kubelet[2837]: I0310 01:32:04.804903 2837 kubelet.go:482] "Attempting to sync node with API server"
Mar 10 01:32:04.806745 kubelet[2837]: I0310 01:32:04.804929 2837 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 10 01:32:04.806745 kubelet[2837]: I0310 01:32:04.804952 2837 kubelet.go:394] "Adding apiserver pod source"
Mar 10 01:32:04.806745 kubelet[2837]: I0310 01:32:04.804964 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 10 01:32:04.808016 kubelet[2837]: I0310 01:32:04.807950 2837 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 10 01:32:04.809355 kubelet[2837]: I0310 01:32:04.809296 2837 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 10 01:32:04.809408 kubelet[2837]: I0310 01:32:04.809372 2837 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 10 01:32:04.816044 kubelet[2837]: I0310 01:32:04.815461 2837 server.go:1257] "Started kubelet"
Mar 10 01:32:04.816244 kubelet[2837]: I0310 01:32:04.815872 2837 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 10 01:32:04.818788 kubelet[2837]: I0310 01:32:04.817513 2837 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 10 01:32:04.818788 kubelet[2837]: I0310 01:32:04.818754 2837 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 10 01:32:04.819288 kubelet[2837]: I0310 01:32:04.819239 2837 server.go:317] "Adding debug handlers to kubelet server"
Mar 10 01:32:04.825834 kubelet[2837]: I0310 01:32:04.825702 2837 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 10 01:32:04.846748 kubelet[2837]: I0310 01:32:04.846711 2837 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 10 01:32:04.848623 kubelet[2837]: I0310 01:32:04.848311 2837 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 10 01:32:04.852221 kubelet[2837]: I0310 01:32:04.852194 2837 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 10 01:32:04.852795 kubelet[2837]: E0310 01:32:04.852770 2837 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 10 01:32:04.854384 kubelet[2837]: I0310 01:32:04.854362 2837 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 10 01:32:04.859411 kubelet[2837]: I0310 01:32:04.859334 2837 reconciler.go:29] "Reconciler: start to sync state"
Mar 10 01:32:04.861183 kubelet[2837]: I0310 01:32:04.861004 2837 factory.go:223] Registration of the systemd container factory successfully
Mar 10 01:32:04.861491 kubelet[2837]: I0310 01:32:04.861427 2837 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 10 01:32:04.872801 kubelet[2837]: I0310 01:32:04.871794 2837 factory.go:223] Registration of the containerd container factory successfully
Mar 10 01:32:04.888682 kubelet[2837]: E0310 01:32:04.888527 2837 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 10 01:32:04.894351 kubelet[2837]: I0310 01:32:04.894312 2837 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 10 01:32:04.900382 kubelet[2837]: I0310 01:32:04.900347 2837 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 10 01:32:04.901245 kubelet[2837]: I0310 01:32:04.901225 2837 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 10 01:32:04.901927 kubelet[2837]: I0310 01:32:04.901910 2837 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 10 01:32:04.902843 kubelet[2837]: E0310 01:32:04.902810 2837 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 10 01:32:05.005388 kubelet[2837]: E0310 01:32:05.003877 2837 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 10 01:32:05.024960 kubelet[2837]: I0310 01:32:05.023469 2837 cpu_manager.go:225] "Starting" policy="none"
Mar 10 01:32:05.024960 kubelet[2837]: I0310 01:32:05.023511 2837 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 10 01:32:05.024960 kubelet[2837]: I0310 01:32:05.023542 2837 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 10 01:32:05.024960 kubelet[2837]: I0310 01:32:05.024316 2837 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 10 01:32:05.024960 kubelet[2837]: I0310 01:32:05.024337 2837 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 10 01:32:05.024960 kubelet[2837]: I0310 01:32:05.024367 2837 policy_none.go:50] "Start"
Mar 10 01:32:05.024960 kubelet[2837]: I0310 01:32:05.024386 2837 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 10 01:32:05.024960 kubelet[2837]: I0310 01:32:05.024405 2837 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 10 01:32:05.026965 kubelet[2837]: I0310 01:32:05.025256 2837 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 10 01:32:05.026965 kubelet[2837]: I0310 01:32:05.025272 2837 policy_none.go:44] "Start"
Mar 10 01:32:05.038178 kubelet[2837]: E0310 01:32:05.038040 2837 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 10 01:32:05.038762 kubelet[2837]: I0310 01:32:05.038354 2837 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 10 01:32:05.038762 kubelet[2837]: I0310 01:32:05.038438 2837 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 10 01:32:05.039443 kubelet[2837]: I0310 01:32:05.039355 2837 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 10 01:32:05.049548 kubelet[2837]: E0310 01:32:05.049049 2837 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 10 01:32:05.162292 kubelet[2837]: I0310 01:32:05.159252 2837 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 10 01:32:05.178055 kubelet[2837]: I0310 01:32:05.177940 2837 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 10 01:32:05.179009 kubelet[2837]: I0310 01:32:05.178883 2837 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 10 01:32:05.206543 kubelet[2837]: I0310 01:32:05.206448 2837 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 10 01:32:05.207041 kubelet[2837]: I0310 01:32:05.206994 2837 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 10 01:32:05.207528 kubelet[2837]: I0310 01:32:05.207321 2837 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:32:05.221555 kubelet[2837]: E0310 01:32:05.221479 2837 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:32:05.262364 kubelet[2837]: I0310 01:32:05.262173 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e8a87e19b885c9fb9d42b9c3defccca-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7e8a87e19b885c9fb9d42b9c3defccca\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:32:05.262364 kubelet[2837]: I0310 01:32:05.262282 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:32:05.262364 kubelet[2837]: I0310 01:32:05.262314 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:32:05.262364 kubelet[2837]: I0310 01:32:05.262331 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 10 01:32:05.262364 kubelet[2837]: I0310 01:32:05.262346 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e8a87e19b885c9fb9d42b9c3defccca-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7e8a87e19b885c9fb9d42b9c3defccca\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:32:05.262865 kubelet[2837]: I0310 01:32:05.262361 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e8a87e19b885c9fb9d42b9c3defccca-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7e8a87e19b885c9fb9d42b9c3defccca\") " pod="kube-system/kube-apiserver-localhost"
Mar 10 01:32:05.262865 kubelet[2837]: I0310 01:32:05.262375 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:32:05.262865 kubelet[2837]: I0310 01:32:05.262387 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:32:05.262865 kubelet[2837]: I0310 01:32:05.262524 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 10 01:32:05.521009 kubelet[2837]: E0310 01:32:05.520426 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:05.524286 kubelet[2837]: E0310 01:32:05.521845 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:05.524821 kubelet[2837]: E0310 01:32:05.524518 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:05.809393 kubelet[2837]: I0310 01:32:05.806714 2837 apiserver.go:52] "Watching apiserver"
Mar 10 01:32:05.944904 kubelet[2837]: E0310 01:32:05.942213 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:05.944904 kubelet[2837]: E0310 01:32:05.943412 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:05.949413 kubelet[2837]: E0310 01:32:05.947713 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:05.955235 kubelet[2837]: I0310 01:32:05.955198 2837 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 10 01:32:06.002623 kubelet[2837]: I0310 01:32:06.001504 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.001487763 podStartE2EDuration="1.001487763s" podCreationTimestamp="2026-03-10 01:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:32:06.001231175 +0000 UTC m=+1.396345922" watchObservedRunningTime="2026-03-10 01:32:06.001487763 +0000 UTC m=+1.396602500"
Mar 10 01:32:06.100903 kubelet[2837]: I0310 01:32:06.098381 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.098364653 podStartE2EDuration="1.098364653s" podCreationTimestamp="2026-03-10 01:32:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:32:06.024122448 +0000 UTC m=+1.419237205" watchObservedRunningTime="2026-03-10 01:32:06.098364653 +0000 UTC m=+1.493479389"
Mar 10 01:32:06.961322 kubelet[2837]: E0310 01:32:06.960753 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:06.961322 kubelet[2837]: E0310 01:32:06.960874 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:32:07.422236 kubelet[2837]: E0310 01:32:07.418841 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:32:07.998057 kubelet[2837]: E0310 01:32:07.994798 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:32:08.086083 kubelet[2837]: E0310 01:32:07.998740 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:32:08.451311 kubelet[2837]: I0310 01:32:08.449838 2837 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 10 01:32:08.451311 kubelet[2837]: I0310 01:32:08.451340 2837 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 10 01:32:08.452306 containerd[1568]: time="2026-03-10T01:32:08.450812464Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 10 01:32:08.734808 kubelet[2837]: I0310 01:32:08.716410 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db414e4e-a593-4f82-aa01-cb0f7057fe0a-kube-proxy\") pod \"kube-proxy-zb7m5\" (UID: \"db414e4e-a593-4f82-aa01-cb0f7057fe0a\") " pod="kube-system/kube-proxy-zb7m5"
Mar 10 01:32:08.734808 kubelet[2837]: I0310 01:32:08.732950 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db414e4e-a593-4f82-aa01-cb0f7057fe0a-xtables-lock\") pod \"kube-proxy-zb7m5\" (UID: \"db414e4e-a593-4f82-aa01-cb0f7057fe0a\") " pod="kube-system/kube-proxy-zb7m5"
Mar 10 01:32:08.734808 kubelet[2837]: I0310 01:32:08.732992 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db414e4e-a593-4f82-aa01-cb0f7057fe0a-lib-modules\") pod \"kube-proxy-zb7m5\" (UID: \"db414e4e-a593-4f82-aa01-cb0f7057fe0a\") " pod="kube-system/kube-proxy-zb7m5"
Mar 10 01:32:08.745897 systemd[1]: Created slice kubepods-besteffort-poddb414e4e_a593_4f82_aa01_cb0f7057fe0a.slice - libcontainer container kubepods-besteffort-poddb414e4e_a593_4f82_aa01_cb0f7057fe0a.slice.
Mar 10 01:32:08.834239 kubelet[2837]: I0310 01:32:08.833950 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dh5l\" (UniqueName: \"kubernetes.io/projected/db414e4e-a593-4f82-aa01-cb0f7057fe0a-kube-api-access-8dh5l\") pod \"kube-proxy-zb7m5\" (UID: \"db414e4e-a593-4f82-aa01-cb0f7057fe0a\") " pod="kube-system/kube-proxy-zb7m5"
Mar 10 01:32:09.200490 kubelet[2837]: E0310 01:32:09.198136 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:09.222378 containerd[1568]: time="2026-03-10T01:32:09.218703103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zb7m5,Uid:db414e4e-a593-4f82-aa01-cb0f7057fe0a,Namespace:kube-system,Attempt:0,}"
Mar 10 01:32:09.350783 kubelet[2837]: I0310 01:32:09.350724 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c4a7d2b-c453-4bc4-9094-91e4bd1e90e3-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-rhh5f\" (UID: \"3c4a7d2b-c453-4bc4-9094-91e4bd1e90e3\") " pod="tigera-operator/tigera-operator-6cf4cccc57-rhh5f"
Mar 10 01:32:09.350783 kubelet[2837]: I0310 01:32:09.350775 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf58r\" (UniqueName: \"kubernetes.io/projected/3c4a7d2b-c453-4bc4-9094-91e4bd1e90e3-kube-api-access-mf58r\") pod \"tigera-operator-6cf4cccc57-rhh5f\" (UID: \"3c4a7d2b-c453-4bc4-9094-91e4bd1e90e3\") " pod="tigera-operator/tigera-operator-6cf4cccc57-rhh5f"
Mar 10 01:32:09.388753 systemd[1]: Created slice kubepods-besteffort-pod3c4a7d2b_c453_4bc4_9094_91e4bd1e90e3.slice - libcontainer container kubepods-besteffort-pod3c4a7d2b_c453_4bc4_9094_91e4bd1e90e3.slice.
Mar 10 01:32:09.402771 containerd[1568]: time="2026-03-10T01:32:09.401788691Z" level=info msg="connecting to shim 6d7a86a889a5110b5f7178bf1582b90695b13cb1f9252c1ffdee2f647823bf98" address="unix:///run/containerd/s/e2496d99d4224e4963aa1f3871d41d8935115a051dbe03e182a0d597e6ebae6b" namespace=k8s.io protocol=ttrpc version=3
Mar 10 01:32:09.706482 containerd[1568]: time="2026-03-10T01:32:09.706205774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-rhh5f,Uid:3c4a7d2b-c453-4bc4-9094-91e4bd1e90e3,Namespace:tigera-operator,Attempt:0,}"
Mar 10 01:32:09.721760 systemd[1]: Started cri-containerd-6d7a86a889a5110b5f7178bf1582b90695b13cb1f9252c1ffdee2f647823bf98.scope - libcontainer container 6d7a86a889a5110b5f7178bf1582b90695b13cb1f9252c1ffdee2f647823bf98.
Mar 10 01:32:09.835768 containerd[1568]: time="2026-03-10T01:32:09.833234387Z" level=info msg="connecting to shim c3c49f723508c980e3f3fc84b4d19fd53c2ca4a46c270e4344aecdecadb5b260" address="unix:///run/containerd/s/33ab04830ec5534a2464d7d5c0d48ba3a3fcf2d128d3054335c1d9f9ddc0b9f9" namespace=k8s.io protocol=ttrpc version=3
Mar 10 01:32:09.939955 systemd[1]: Started cri-containerd-c3c49f723508c980e3f3fc84b4d19fd53c2ca4a46c270e4344aecdecadb5b260.scope - libcontainer container c3c49f723508c980e3f3fc84b4d19fd53c2ca4a46c270e4344aecdecadb5b260.
Mar 10 01:32:09.985789 containerd[1568]: time="2026-03-10T01:32:09.985554117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zb7m5,Uid:db414e4e-a593-4f82-aa01-cb0f7057fe0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d7a86a889a5110b5f7178bf1582b90695b13cb1f9252c1ffdee2f647823bf98\""
Mar 10 01:32:09.988290 kubelet[2837]: E0310 01:32:09.988126 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:10.022282 containerd[1568]: time="2026-03-10T01:32:10.021708954Z" level=info msg="CreateContainer within sandbox \"6d7a86a889a5110b5f7178bf1582b90695b13cb1f9252c1ffdee2f647823bf98\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 10 01:32:10.057864 containerd[1568]: time="2026-03-10T01:32:10.057624160Z" level=info msg="Container e62601e7104a015d754ebe2e749279f2aee40d23ea9b185ca6776370e46a8c04: CDI devices from CRI Config.CDIDevices: []"
Mar 10 01:32:10.100486 containerd[1568]: time="2026-03-10T01:32:10.100143604Z" level=info msg="CreateContainer within sandbox \"6d7a86a889a5110b5f7178bf1582b90695b13cb1f9252c1ffdee2f647823bf98\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e62601e7104a015d754ebe2e749279f2aee40d23ea9b185ca6776370e46a8c04\""
Mar 10 01:32:10.102831 containerd[1568]: time="2026-03-10T01:32:10.102516126Z" level=info msg="StartContainer for \"e62601e7104a015d754ebe2e749279f2aee40d23ea9b185ca6776370e46a8c04\""
Mar 10 01:32:10.108937 containerd[1568]: time="2026-03-10T01:32:10.106537704Z" level=info msg="connecting to shim e62601e7104a015d754ebe2e749279f2aee40d23ea9b185ca6776370e46a8c04" address="unix:///run/containerd/s/e2496d99d4224e4963aa1f3871d41d8935115a051dbe03e182a0d597e6ebae6b" protocol=ttrpc version=3
Mar 10 01:32:10.227797 containerd[1568]: time="2026-03-10T01:32:10.227651683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-rhh5f,Uid:3c4a7d2b-c453-4bc4-9094-91e4bd1e90e3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c3c49f723508c980e3f3fc84b4d19fd53c2ca4a46c270e4344aecdecadb5b260\""
Mar 10 01:32:10.233475 containerd[1568]: time="2026-03-10T01:32:10.233375685Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 10 01:32:10.251397 systemd[1]: Started cri-containerd-e62601e7104a015d754ebe2e749279f2aee40d23ea9b185ca6776370e46a8c04.scope - libcontainer container e62601e7104a015d754ebe2e749279f2aee40d23ea9b185ca6776370e46a8c04.
Mar 10 01:32:10.395835 containerd[1568]: time="2026-03-10T01:32:10.395760630Z" level=info msg="StartContainer for \"e62601e7104a015d754ebe2e749279f2aee40d23ea9b185ca6776370e46a8c04\" returns successfully"
Mar 10 01:32:11.037519 kubelet[2837]: E0310 01:32:11.037340 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:11.318327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700236822.mount: Deactivated successfully.
Mar 10 01:32:12.044802 kubelet[2837]: E0310 01:32:12.044427 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:14.388828 containerd[1568]: time="2026-03-10T01:32:14.388412957Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:14.389910 containerd[1568]: time="2026-03-10T01:32:14.389837141Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 10 01:32:14.391940 containerd[1568]: time="2026-03-10T01:32:14.391870644Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:14.396131 containerd[1568]: time="2026-03-10T01:32:14.396008542Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 10 01:32:14.396630 containerd[1568]: time="2026-03-10T01:32:14.396532337Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.163099296s"
Mar 10 01:32:14.396810 containerd[1568]: time="2026-03-10T01:32:14.396652219Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 10 01:32:14.404101 containerd[1568]: time="2026-03-10T01:32:14.404033947Z" level=info msg="CreateContainer within sandbox \"c3c49f723508c980e3f3fc84b4d19fd53c2ca4a46c270e4344aecdecadb5b260\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 10 01:32:14.416371 containerd[1568]: time="2026-03-10T01:32:14.416270375Z" level=info msg="Container d601939d453e4758dec071c57633d25a3f5d6dfb6a9d0fa76e526f67cab95ed7: CDI devices from CRI Config.CDIDevices: []"
Mar 10 01:32:14.428682 containerd[1568]: time="2026-03-10T01:32:14.428352827Z" level=info msg="CreateContainer within sandbox \"c3c49f723508c980e3f3fc84b4d19fd53c2ca4a46c270e4344aecdecadb5b260\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d601939d453e4758dec071c57633d25a3f5d6dfb6a9d0fa76e526f67cab95ed7\""
Mar 10 01:32:14.430104 containerd[1568]: time="2026-03-10T01:32:14.430031317Z" level=info msg="StartContainer for \"d601939d453e4758dec071c57633d25a3f5d6dfb6a9d0fa76e526f67cab95ed7\""
Mar 10 01:32:14.434968 containerd[1568]: time="2026-03-10T01:32:14.434807985Z" level=info msg="connecting to shim d601939d453e4758dec071c57633d25a3f5d6dfb6a9d0fa76e526f67cab95ed7" address="unix:///run/containerd/s/33ab04830ec5534a2464d7d5c0d48ba3a3fcf2d128d3054335c1d9f9ddc0b9f9" protocol=ttrpc version=3
Mar 10 01:32:14.492010 systemd[1]: Started cri-containerd-d601939d453e4758dec071c57633d25a3f5d6dfb6a9d0fa76e526f67cab95ed7.scope - libcontainer container cri-containerd-d601939d453e4758dec071c57633d25a3f5d6dfb6a9d0fa76e526f67cab95ed7.scope - libcontainer container d601939d453e4758dec071c57633d25a3f5d6dfb6a9d0fa76e526f67cab95ed7.
Mar 10 01:32:14.622520 containerd[1568]: time="2026-03-10T01:32:14.613531489Z" level=info msg="StartContainer for \"d601939d453e4758dec071c57633d25a3f5d6dfb6a9d0fa76e526f67cab95ed7\" returns successfully"
Mar 10 01:32:15.095435 kubelet[2837]: E0310 01:32:15.092535 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:15.125971 kubelet[2837]: I0310 01:32:15.125282 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-zb7m5" podStartSLOduration=7.125263718 podStartE2EDuration="7.125263718s" podCreationTimestamp="2026-03-10 01:32:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:32:11.075828101 +0000 UTC m=+6.470942838" watchObservedRunningTime="2026-03-10 01:32:15.125263718 +0000 UTC m=+10.520378465"
Mar 10 01:32:15.161265 kubelet[2837]: I0310 01:32:15.161050 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-rhh5f" podStartSLOduration=1.994566876 podStartE2EDuration="6.161032636s" podCreationTimestamp="2026-03-10 01:32:09 +0000 UTC" firstStartedPulling="2026-03-10 01:32:10.231308722 +0000 UTC m=+5.626423458" lastFinishedPulling="2026-03-10 01:32:14.397774481 +0000 UTC m=+9.792889218" observedRunningTime="2026-03-10 01:32:15.125712533 +0000 UTC m=+10.520827270" watchObservedRunningTime="2026-03-10 01:32:15.161032636 +0000 UTC m=+10.556147373"
Mar 10 01:32:15.301496 kubelet[2837]: E0310 01:32:15.299878 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:17.410089 kubelet[2837]: E0310 01:32:17.409551 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:21.650860 sudo[1806]: pam_unix(sudo:session): session closed for user root
Mar 10 01:32:21.662318 sshd[1805]: Connection closed by 10.0.0.1 port 47728
Mar 10 01:32:21.671181 sshd-session[1802]: pam_unix(sshd:session): session closed for user core
Mar 10 01:32:21.692972 systemd-logind[1543]: Session 9 logged out. Waiting for processes to exit.
Mar 10 01:32:21.694456 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:47728.service: Deactivated successfully.
Mar 10 01:32:21.706431 systemd[1]: session-9.scope: Deactivated successfully.
Mar 10 01:32:21.708530 systemd[1]: session-9.scope: Consumed 8.680s CPU time, 232.4M memory peak.
Mar 10 01:32:21.731796 systemd-logind[1543]: Removed session 9.
Mar 10 01:32:25.119683 kubelet[2837]: E0310 01:32:25.119171 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:25.316406 kubelet[2837]: E0310 01:32:25.315098 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:27.206089 systemd[1]: Created slice kubepods-besteffort-pod76f96e4a_be3a_4a8d_b2bf_2f7cf8104de6.slice - libcontainer container kubepods-besteffort-pod76f96e4a_be3a_4a8d_b2bf_2f7cf8104de6.slice.
Mar 10 01:32:27.262955 kubelet[2837]: I0310 01:32:27.260757 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f96e4a-be3a-4a8d-b2bf-2f7cf8104de6-tigera-ca-bundle\") pod \"calico-typha-7d9595f6d4-jt65l\" (UID: \"76f96e4a-be3a-4a8d-b2bf-2f7cf8104de6\") " pod="calico-system/calico-typha-7d9595f6d4-jt65l"
Mar 10 01:32:27.270107 kubelet[2837]: I0310 01:32:27.261257 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/76f96e4a-be3a-4a8d-b2bf-2f7cf8104de6-typha-certs\") pod \"calico-typha-7d9595f6d4-jt65l\" (UID: \"76f96e4a-be3a-4a8d-b2bf-2f7cf8104de6\") " pod="calico-system/calico-typha-7d9595f6d4-jt65l"
Mar 10 01:32:27.270107 kubelet[2837]: I0310 01:32:27.269441 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k7pq\" (UniqueName: \"kubernetes.io/projected/76f96e4a-be3a-4a8d-b2bf-2f7cf8104de6-kube-api-access-4k7pq\") pod \"calico-typha-7d9595f6d4-jt65l\" (UID: \"76f96e4a-be3a-4a8d-b2bf-2f7cf8104de6\") " pod="calico-system/calico-typha-7d9595f6d4-jt65l"
Mar 10 01:32:27.400356 systemd[1]: Created slice kubepods-besteffort-pod3319f798_00a6_4b78_a719_ad9ef3b88f1e.slice - libcontainer container kubepods-besteffort-pod3319f798_00a6_4b78_a719_ad9ef3b88f1e.slice.
Mar 10 01:32:27.493135 kubelet[2837]: I0310 01:32:27.493045 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjq2s\" (UniqueName: \"kubernetes.io/projected/3319f798-00a6-4b78-a719-ad9ef3b88f1e-kube-api-access-sjq2s\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.494353 kubelet[2837]: I0310 01:32:27.494298 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-flexvol-driver-host\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.494682 kubelet[2837]: I0310 01:32:27.494542 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-nodeproc\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.495133 kubelet[2837]: I0310 01:32:27.494937 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-policysync\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.495452 kubelet[2837]: I0310 01:32:27.495295 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-xtables-lock\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.495794 kubelet[2837]: I0310 01:32:27.495698 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-cni-log-dir\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.496042 kubelet[2837]: I0310 01:32:27.495939 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3319f798-00a6-4b78-a719-ad9ef3b88f1e-node-certs\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.496296 kubelet[2837]: I0310 01:32:27.496153 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-lib-modules\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.496673 kubelet[2837]: I0310 01:32:27.496406 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-var-lib-calico\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.497279 kubelet[2837]: I0310 01:32:27.496829 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-cni-bin-dir\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.497279 kubelet[2837]: I0310 01:32:27.496969 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-cni-net-dir\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.497279 kubelet[2837]: I0310 01:32:27.496993 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3319f798-00a6-4b78-a719-ad9ef3b88f1e-tigera-ca-bundle\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.497279 kubelet[2837]: I0310 01:32:27.497019 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-bpffs\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.497279 kubelet[2837]: I0310 01:32:27.497089 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-var-run-calico\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.497497 kubelet[2837]: I0310 01:32:27.497135 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/3319f798-00a6-4b78-a719-ad9ef3b88f1e-sys-fs\") pod \"calico-node-w755j\" (UID: \"3319f798-00a6-4b78-a719-ad9ef3b88f1e\") " pod="calico-system/calico-node-w755j"
Mar 10 01:32:27.530859 kubelet[2837]: E0310 01:32:27.530764 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:32:27.534820 containerd[1568]: time="2026-03-10T01:32:27.533544752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d9595f6d4-jt65l,Uid:76f96e4a-be3a-4a8d-b2bf-2f7cf8104de6,Namespace:calico-system,Attempt:0,}"
Mar 10 01:32:27.618757 kubelet[2837]: E0310 01:32:27.615116 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.618757 kubelet[2837]: W0310 01:32:27.615219 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.618757 kubelet[2837]: E0310 01:32:27.615261 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.621878 kubelet[2837]: E0310 01:32:27.619556 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825"
Mar 10 01:32:27.621878 kubelet[2837]: E0310 01:32:27.621829 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.621878 kubelet[2837]: W0310 01:32:27.621841 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.621878 kubelet[2837]: E0310 01:32:27.621886 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.685701 kubelet[2837]: E0310 01:32:27.684794 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.685701 kubelet[2837]: W0310 01:32:27.684826 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.685701 kubelet[2837]: E0310 01:32:27.684850 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.689736 kubelet[2837]: E0310 01:32:27.689444 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.689736 kubelet[2837]: W0310 01:32:27.689466 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.689736 kubelet[2837]: E0310 01:32:27.689484 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.691848 kubelet[2837]: E0310 01:32:27.691775 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.691848 kubelet[2837]: W0310 01:32:27.691836 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.691955 kubelet[2837]: E0310 01:32:27.691862 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.693400 kubelet[2837]: E0310 01:32:27.693157 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.693400 kubelet[2837]: W0310 01:32:27.693309 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.693400 kubelet[2837]: E0310 01:32:27.693332 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.696644 kubelet[2837]: E0310 01:32:27.695304 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.696644 kubelet[2837]: W0310 01:32:27.695469 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.696644 kubelet[2837]: E0310 01:32:27.695490 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.701800 containerd[1568]: time="2026-03-10T01:32:27.701763752Z" level=info msg="connecting to shim 47e5666546a22dedabf2a518ed23c7397872c033dfabead509f34e69875bfc75" address="unix:///run/containerd/s/1125338974c8c3b5ac9593cdb8335535961da9eb494253e802b07b6fb38d8f9b" namespace=k8s.io protocol=ttrpc version=3
Mar 10 01:32:27.703039 kubelet[2837]: E0310 01:32:27.702967 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.703039 kubelet[2837]: W0310 01:32:27.703010 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.703039 kubelet[2837]: E0310 01:32:27.703029 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.705787 kubelet[2837]: E0310 01:32:27.705706 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.705787 kubelet[2837]: W0310 01:32:27.705764 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.705787 kubelet[2837]: E0310 01:32:27.705787 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.708150 kubelet[2837]: E0310 01:32:27.708085 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.708150 kubelet[2837]: W0310 01:32:27.708125 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.708150 kubelet[2837]: E0310 01:32:27.708140 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.715806 kubelet[2837]: E0310 01:32:27.715322 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.716944 kubelet[2837]: W0310 01:32:27.716798 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.717488 kubelet[2837]: E0310 01:32:27.717330 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 10 01:32:27.723854 kubelet[2837]: E0310 01:32:27.723151 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 10 01:32:27.725041 kubelet[2837]: W0310 01:32:27.724860 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 10 01:32:27.725698 kubelet[2837]: E0310 01:32:27.725335 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 10 01:32:27.727943 kubelet[2837]: E0310 01:32:27.727865 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.727943 kubelet[2837]: W0310 01:32:27.727930 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.728124 kubelet[2837]: E0310 01:32:27.727960 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.731520 kubelet[2837]: E0310 01:32:27.731121 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.731898 kubelet[2837]: W0310 01:32:27.731775 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.731955 kubelet[2837]: E0310 01:32:27.731900 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.735455 kubelet[2837]: E0310 01:32:27.734526 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.735455 kubelet[2837]: W0310 01:32:27.734550 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.735455 kubelet[2837]: E0310 01:32:27.734656 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.737711 kubelet[2837]: E0310 01:32:27.737496 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.737711 kubelet[2837]: W0310 01:32:27.737549 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.738430 kubelet[2837]: E0310 01:32:27.738232 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.740846 kubelet[2837]: E0310 01:32:27.739884 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.740846 kubelet[2837]: W0310 01:32:27.739944 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.740846 kubelet[2837]: E0310 01:32:27.739965 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.742363 kubelet[2837]: E0310 01:32:27.742292 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.742363 kubelet[2837]: W0310 01:32:27.742341 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.742363 kubelet[2837]: E0310 01:32:27.742361 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.743886 kubelet[2837]: E0310 01:32:27.743680 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.743886 kubelet[2837]: W0310 01:32:27.743732 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.743886 kubelet[2837]: E0310 01:32:27.743752 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.747747 kubelet[2837]: E0310 01:32:27.747646 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.747747 kubelet[2837]: W0310 01:32:27.747698 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.747747 kubelet[2837]: E0310 01:32:27.747719 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.751418 kubelet[2837]: E0310 01:32:27.748966 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.751418 kubelet[2837]: W0310 01:32:27.748979 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.751418 kubelet[2837]: E0310 01:32:27.748995 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.751418 kubelet[2837]: E0310 01:32:27.749466 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.751418 kubelet[2837]: W0310 01:32:27.749477 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.751418 kubelet[2837]: E0310 01:32:27.749489 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.751418 kubelet[2837]: E0310 01:32:27.749978 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.751418 kubelet[2837]: W0310 01:32:27.749988 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.751418 kubelet[2837]: E0310 01:32:27.750000 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.751418 kubelet[2837]: E0310 01:32:27.750517 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.751839 kubelet[2837]: W0310 01:32:27.750527 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.751839 kubelet[2837]: E0310 01:32:27.750539 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.751839 kubelet[2837]: I0310 01:32:27.750658 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6470ee27-1ae0-4c37-bfc5-73aa0f2ec825-registration-dir\") pod \"csi-node-driver-xlrx5\" (UID: \"6470ee27-1ae0-4c37-bfc5-73aa0f2ec825\") " pod="calico-system/csi-node-driver-xlrx5" Mar 10 01:32:27.751839 kubelet[2837]: E0310 01:32:27.750965 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.751839 kubelet[2837]: W0310 01:32:27.750976 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.751839 kubelet[2837]: E0310 01:32:27.750987 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.751839 kubelet[2837]: I0310 01:32:27.751035 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6470ee27-1ae0-4c37-bfc5-73aa0f2ec825-varrun\") pod \"csi-node-driver-xlrx5\" (UID: \"6470ee27-1ae0-4c37-bfc5-73aa0f2ec825\") " pod="calico-system/csi-node-driver-xlrx5" Mar 10 01:32:27.751839 kubelet[2837]: E0310 01:32:27.751401 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.752070 kubelet[2837]: W0310 01:32:27.751415 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.752070 kubelet[2837]: E0310 01:32:27.751426 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.752070 kubelet[2837]: I0310 01:32:27.751451 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z97j2\" (UniqueName: \"kubernetes.io/projected/6470ee27-1ae0-4c37-bfc5-73aa0f2ec825-kube-api-access-z97j2\") pod \"csi-node-driver-xlrx5\" (UID: \"6470ee27-1ae0-4c37-bfc5-73aa0f2ec825\") " pod="calico-system/csi-node-driver-xlrx5" Mar 10 01:32:27.752070 kubelet[2837]: E0310 01:32:27.751919 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.752070 kubelet[2837]: W0310 01:32:27.751932 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.752070 kubelet[2837]: E0310 01:32:27.751943 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.752524 kubelet[2837]: I0310 01:32:27.751961 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6470ee27-1ae0-4c37-bfc5-73aa0f2ec825-socket-dir\") pod \"csi-node-driver-xlrx5\" (UID: \"6470ee27-1ae0-4c37-bfc5-73aa0f2ec825\") " pod="calico-system/csi-node-driver-xlrx5" Mar 10 01:32:27.753434 kubelet[2837]: E0310 01:32:27.753335 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.753434 kubelet[2837]: W0310 01:32:27.753383 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.753434 kubelet[2837]: E0310 01:32:27.753399 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.754349 kubelet[2837]: E0310 01:32:27.754237 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.754349 kubelet[2837]: W0310 01:32:27.754260 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.755734 kubelet[2837]: E0310 01:32:27.754936 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.755934 kubelet[2837]: E0310 01:32:27.755801 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.755934 kubelet[2837]: W0310 01:32:27.755814 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.755934 kubelet[2837]: E0310 01:32:27.755828 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.756382 containerd[1568]: time="2026-03-10T01:32:27.756117353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w755j,Uid:3319f798-00a6-4b78-a719-ad9ef3b88f1e,Namespace:calico-system,Attempt:0,}" Mar 10 01:32:27.756544 kubelet[2837]: E0310 01:32:27.756429 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.756544 kubelet[2837]: W0310 01:32:27.756442 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.756544 kubelet[2837]: E0310 01:32:27.756454 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.757407 kubelet[2837]: E0310 01:32:27.757332 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.757508 kubelet[2837]: W0310 01:32:27.757481 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.757508 kubelet[2837]: E0310 01:32:27.757498 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.758772 kubelet[2837]: E0310 01:32:27.758523 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.758772 kubelet[2837]: W0310 01:32:27.758536 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.758772 kubelet[2837]: E0310 01:32:27.758687 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.759443 kubelet[2837]: I0310 01:32:27.758984 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6470ee27-1ae0-4c37-bfc5-73aa0f2ec825-kubelet-dir\") pod \"csi-node-driver-xlrx5\" (UID: \"6470ee27-1ae0-4c37-bfc5-73aa0f2ec825\") " pod="calico-system/csi-node-driver-xlrx5" Mar 10 01:32:27.760102 kubelet[2837]: E0310 01:32:27.760010 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.761914 kubelet[2837]: W0310 01:32:27.760159 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.761914 kubelet[2837]: E0310 01:32:27.760322 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.761914 kubelet[2837]: E0310 01:32:27.761759 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.761914 kubelet[2837]: W0310 01:32:27.761774 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.761914 kubelet[2837]: E0310 01:32:27.761788 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.764230 kubelet[2837]: E0310 01:32:27.764040 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.764313 kubelet[2837]: W0310 01:32:27.764165 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.764313 kubelet[2837]: E0310 01:32:27.764290 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.765885 kubelet[2837]: E0310 01:32:27.765814 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.765885 kubelet[2837]: W0310 01:32:27.765845 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.765885 kubelet[2837]: E0310 01:32:27.765856 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.775478 kubelet[2837]: E0310 01:32:27.768853 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.775478 kubelet[2837]: W0310 01:32:27.768873 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.775478 kubelet[2837]: E0310 01:32:27.768887 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.862061 systemd[1]: Started cri-containerd-47e5666546a22dedabf2a518ed23c7397872c033dfabead509f34e69875bfc75.scope - libcontainer container 47e5666546a22dedabf2a518ed23c7397872c033dfabead509f34e69875bfc75. Mar 10 01:32:27.867832 kubelet[2837]: E0310 01:32:27.865991 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.867832 kubelet[2837]: W0310 01:32:27.866018 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.867832 kubelet[2837]: E0310 01:32:27.866045 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.867832 kubelet[2837]: E0310 01:32:27.867100 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.867832 kubelet[2837]: W0310 01:32:27.867114 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.867832 kubelet[2837]: E0310 01:32:27.867136 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.868464 kubelet[2837]: E0310 01:32:27.868375 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.868464 kubelet[2837]: W0310 01:32:27.868389 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.868464 kubelet[2837]: E0310 01:32:27.868406 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.895984 kubelet[2837]: E0310 01:32:27.875769 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.895984 kubelet[2837]: W0310 01:32:27.876539 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.895984 kubelet[2837]: E0310 01:32:27.877891 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.895984 kubelet[2837]: E0310 01:32:27.887659 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.895984 kubelet[2837]: W0310 01:32:27.887845 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.895984 kubelet[2837]: E0310 01:32:27.888251 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.895984 kubelet[2837]: E0310 01:32:27.888792 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.895984 kubelet[2837]: W0310 01:32:27.888805 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.895984 kubelet[2837]: E0310 01:32:27.888822 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.895984 kubelet[2837]: E0310 01:32:27.889657 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.913905 kubelet[2837]: W0310 01:32:27.889671 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.913905 kubelet[2837]: E0310 01:32:27.889686 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.913905 kubelet[2837]: E0310 01:32:27.890534 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.913905 kubelet[2837]: W0310 01:32:27.890546 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.913905 kubelet[2837]: E0310 01:32:27.890645 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.913905 kubelet[2837]: E0310 01:32:27.891418 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.913905 kubelet[2837]: W0310 01:32:27.891429 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.913905 kubelet[2837]: E0310 01:32:27.891442 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.913905 kubelet[2837]: E0310 01:32:27.892295 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.913905 kubelet[2837]: W0310 01:32:27.892307 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.915139 kubelet[2837]: E0310 01:32:27.892319 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.915139 kubelet[2837]: E0310 01:32:27.893468 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.915139 kubelet[2837]: W0310 01:32:27.893479 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.915139 kubelet[2837]: E0310 01:32:27.893550 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.915139 kubelet[2837]: E0310 01:32:27.894394 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.915139 kubelet[2837]: W0310 01:32:27.894407 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.915139 kubelet[2837]: E0310 01:32:27.894418 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.915139 kubelet[2837]: E0310 01:32:27.895282 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.915139 kubelet[2837]: W0310 01:32:27.895294 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.915139 kubelet[2837]: E0310 01:32:27.895310 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.921489 kubelet[2837]: E0310 01:32:27.897059 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.921489 kubelet[2837]: W0310 01:32:27.897072 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.921489 kubelet[2837]: E0310 01:32:27.897085 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.921489 kubelet[2837]: E0310 01:32:27.898834 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.921489 kubelet[2837]: W0310 01:32:27.898845 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.921489 kubelet[2837]: E0310 01:32:27.898857 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.921489 kubelet[2837]: E0310 01:32:27.901314 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.921489 kubelet[2837]: W0310 01:32:27.901332 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.921489 kubelet[2837]: E0310 01:32:27.901347 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.921489 kubelet[2837]: E0310 01:32:27.901979 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.922050 containerd[1568]: time="2026-03-10T01:32:27.919551448Z" level=info msg="connecting to shim cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf" address="unix:///run/containerd/s/ceda13777fbcc68ccaefe385caa712a6d7670f7d9117625b6c3ff13262f9d9b0" namespace=k8s.io protocol=ttrpc version=3 Mar 10 01:32:27.922136 kubelet[2837]: W0310 01:32:27.901995 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.922136 kubelet[2837]: E0310 01:32:27.902011 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.922136 kubelet[2837]: E0310 01:32:27.903792 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.922136 kubelet[2837]: W0310 01:32:27.903806 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.922136 kubelet[2837]: E0310 01:32:27.903911 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.922136 kubelet[2837]: E0310 01:32:27.904384 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.922136 kubelet[2837]: W0310 01:32:27.904393 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.922136 kubelet[2837]: E0310 01:32:27.904403 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.922136 kubelet[2837]: E0310 01:32:27.904809 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.922136 kubelet[2837]: W0310 01:32:27.904820 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.926551 kubelet[2837]: E0310 01:32:27.904835 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.926551 kubelet[2837]: E0310 01:32:27.905326 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.926551 kubelet[2837]: W0310 01:32:27.905336 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.926551 kubelet[2837]: E0310 01:32:27.905345 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.926551 kubelet[2837]: E0310 01:32:27.905951 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.926551 kubelet[2837]: W0310 01:32:27.905963 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.926551 kubelet[2837]: E0310 01:32:27.905977 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.926551 kubelet[2837]: E0310 01:32:27.906857 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.926551 kubelet[2837]: W0310 01:32:27.906868 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.926551 kubelet[2837]: E0310 01:32:27.906879 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.927935 kubelet[2837]: E0310 01:32:27.908379 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.927935 kubelet[2837]: W0310 01:32:27.908392 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.927935 kubelet[2837]: E0310 01:32:27.908404 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:27.927935 kubelet[2837]: E0310 01:32:27.911057 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.927935 kubelet[2837]: W0310 01:32:27.911071 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.927935 kubelet[2837]: E0310 01:32:27.911085 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 10 01:32:27.945881 kubelet[2837]: E0310 01:32:27.945764 2837 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 10 01:32:27.945881 kubelet[2837]: W0310 01:32:27.945798 2837 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 10 01:32:27.945881 kubelet[2837]: E0310 01:32:27.945824 2837 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 10 01:32:28.018477 systemd[1]: Started cri-containerd-cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf.scope - libcontainer container cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf. Mar 10 01:32:28.160986 containerd[1568]: time="2026-03-10T01:32:28.160479044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w755j,Uid:3319f798-00a6-4b78-a719-ad9ef3b88f1e,Namespace:calico-system,Attempt:0,} returns sandbox id \"cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf\"" Mar 10 01:32:28.165137 containerd[1568]: time="2026-03-10T01:32:28.163928733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d9595f6d4-jt65l,Uid:76f96e4a-be3a-4a8d-b2bf-2f7cf8104de6,Namespace:calico-system,Attempt:0,} returns sandbox id \"47e5666546a22dedabf2a518ed23c7397872c033dfabead509f34e69875bfc75\"" Mar 10 01:32:28.180538 kubelet[2837]: E0310 01:32:28.180397 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:32:28.183074 containerd[1568]: time="2026-03-10T01:32:28.183022566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 10 
01:32:28.908357 kubelet[2837]: E0310 01:32:28.908107 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:29.236187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount300151125.mount: Deactivated successfully. Mar 10 01:32:29.499322 containerd[1568]: time="2026-03-10T01:32:29.498499660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:32:29.500112 containerd[1568]: time="2026-03-10T01:32:29.500064159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 10 01:32:29.503977 containerd[1568]: time="2026-03-10T01:32:29.502050694Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:32:29.506349 containerd[1568]: time="2026-03-10T01:32:29.506256286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:32:29.507177 containerd[1568]: time="2026-03-10T01:32:29.507044792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.323708377s" Mar 10 01:32:29.507177 
containerd[1568]: time="2026-03-10T01:32:29.507080939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 10 01:32:29.509017 containerd[1568]: time="2026-03-10T01:32:29.508845095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 10 01:32:29.524356 containerd[1568]: time="2026-03-10T01:32:29.524022089Z" level=info msg="CreateContainer within sandbox \"cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 10 01:32:29.553467 containerd[1568]: time="2026-03-10T01:32:29.553377322Z" level=info msg="Container 86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:32:29.583042 containerd[1568]: time="2026-03-10T01:32:29.582917511Z" level=info msg="CreateContainer within sandbox \"cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9\"" Mar 10 01:32:29.584326 containerd[1568]: time="2026-03-10T01:32:29.584221654Z" level=info msg="StartContainer for \"86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9\"" Mar 10 01:32:29.586121 containerd[1568]: time="2026-03-10T01:32:29.586059376Z" level=info msg="connecting to shim 86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9" address="unix:///run/containerd/s/ceda13777fbcc68ccaefe385caa712a6d7670f7d9117625b6c3ff13262f9d9b0" protocol=ttrpc version=3 Mar 10 01:32:29.629917 systemd[1]: Started cri-containerd-86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9.scope - libcontainer container 86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9. 
Mar 10 01:32:29.937728 containerd[1568]: time="2026-03-10T01:32:29.937491556Z" level=info msg="StartContainer for \"86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9\" returns successfully" Mar 10 01:32:29.971812 systemd[1]: cri-containerd-86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9.scope: Deactivated successfully. Mar 10 01:32:29.974888 containerd[1568]: time="2026-03-10T01:32:29.974848195Z" level=info msg="received container exit event container_id:\"86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9\" id:\"86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9\" pid:3453 exited_at:{seconds:1773106349 nanos:973977429}" Mar 10 01:32:30.039107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86ed4dbe5540b5b8db09cae1ffcee050bc352beb5ac4509f543a8abaaef128f9-rootfs.mount: Deactivated successfully. Mar 10 01:32:30.908886 kubelet[2837]: E0310 01:32:30.904497 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:32.912856 kubelet[2837]: E0310 01:32:32.912389 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:34.690443 containerd[1568]: time="2026-03-10T01:32:34.689492574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:32:34.691803 containerd[1568]: time="2026-03-10T01:32:34.690887522Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 10 01:32:34.694159 containerd[1568]: time="2026-03-10T01:32:34.693767556Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:32:34.706397 containerd[1568]: time="2026-03-10T01:32:34.705419384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:32:34.706811 containerd[1568]: time="2026-03-10T01:32:34.706619609Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 5.197709725s" Mar 10 01:32:34.706811 containerd[1568]: time="2026-03-10T01:32:34.706654553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 10 01:32:34.721245 containerd[1568]: time="2026-03-10T01:32:34.720793272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 10 01:32:34.755203 containerd[1568]: time="2026-03-10T01:32:34.754954174Z" level=info msg="CreateContainer within sandbox \"47e5666546a22dedabf2a518ed23c7397872c033dfabead509f34e69875bfc75\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 10 01:32:34.787658 containerd[1568]: time="2026-03-10T01:32:34.787309761Z" level=info msg="Container c2a6f3be50f372199304b2629c19adcc5ebeb61a6bea09c725ae65586b2ad504: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:32:34.854755 containerd[1568]: 
time="2026-03-10T01:32:34.853900896Z" level=info msg="CreateContainer within sandbox \"47e5666546a22dedabf2a518ed23c7397872c033dfabead509f34e69875bfc75\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c2a6f3be50f372199304b2629c19adcc5ebeb61a6bea09c725ae65586b2ad504\"" Mar 10 01:32:34.854755 containerd[1568]: time="2026-03-10T01:32:34.854845911Z" level=info msg="StartContainer for \"c2a6f3be50f372199304b2629c19adcc5ebeb61a6bea09c725ae65586b2ad504\"" Mar 10 01:32:34.858859 containerd[1568]: time="2026-03-10T01:32:34.858629003Z" level=info msg="connecting to shim c2a6f3be50f372199304b2629c19adcc5ebeb61a6bea09c725ae65586b2ad504" address="unix:///run/containerd/s/1125338974c8c3b5ac9593cdb8335535961da9eb494253e802b07b6fb38d8f9b" protocol=ttrpc version=3 Mar 10 01:32:34.905244 kubelet[2837]: E0310 01:32:34.903904 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:34.951308 systemd[1]: Started cri-containerd-c2a6f3be50f372199304b2629c19adcc5ebeb61a6bea09c725ae65586b2ad504.scope - libcontainer container c2a6f3be50f372199304b2629c19adcc5ebeb61a6bea09c725ae65586b2ad504. 
Mar 10 01:32:35.181445 containerd[1568]: time="2026-03-10T01:32:35.181150937Z" level=info msg="StartContainer for \"c2a6f3be50f372199304b2629c19adcc5ebeb61a6bea09c725ae65586b2ad504\" returns successfully" Mar 10 01:32:35.337101 kubelet[2837]: E0310 01:32:35.336171 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:32:35.410887 kubelet[2837]: I0310 01:32:35.406347 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-7d9595f6d4-jt65l" podStartSLOduration=1.869548319 podStartE2EDuration="8.406328616s" podCreationTimestamp="2026-03-10 01:32:27 +0000 UTC" firstStartedPulling="2026-03-10 01:32:28.182990444 +0000 UTC m=+23.578105182" lastFinishedPulling="2026-03-10 01:32:34.719770702 +0000 UTC m=+30.114885479" observedRunningTime="2026-03-10 01:32:35.402199622 +0000 UTC m=+30.797314379" watchObservedRunningTime="2026-03-10 01:32:35.406328616 +0000 UTC m=+30.801443383" Mar 10 01:32:36.344705 kubelet[2837]: I0310 01:32:36.344456 2837 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 01:32:36.345311 kubelet[2837]: E0310 01:32:36.345250 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:32:36.905919 kubelet[2837]: E0310 01:32:36.904159 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:38.906389 kubelet[2837]: E0310 01:32:38.904880 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:40.379293 kubelet[2837]: I0310 01:32:40.378865 2837 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 01:32:40.382194 kubelet[2837]: E0310 01:32:40.382153 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:32:40.903785 kubelet[2837]: E0310 01:32:40.903386 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:41.385232 kubelet[2837]: E0310 01:32:41.385194 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:32:42.905749 kubelet[2837]: E0310 01:32:42.905374 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:44.931289 kubelet[2837]: E0310 01:32:44.924548 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:46.904651 kubelet[2837]: 
E0310 01:32:46.904334 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:48.903839 kubelet[2837]: E0310 01:32:48.903066 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:50.417540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521415292.mount: Deactivated successfully. Mar 10 01:32:50.492671 containerd[1568]: time="2026-03-10T01:32:50.492112934Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:32:50.494863 containerd[1568]: time="2026-03-10T01:32:50.494826009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 10 01:32:50.497822 containerd[1568]: time="2026-03-10T01:32:50.497525230Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:32:50.506172 containerd[1568]: time="2026-03-10T01:32:50.506085812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:32:50.507535 containerd[1568]: time="2026-03-10T01:32:50.506846734Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id 
\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 15.786004882s" Mar 10 01:32:50.507535 containerd[1568]: time="2026-03-10T01:32:50.506892718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 10 01:32:50.527679 containerd[1568]: time="2026-03-10T01:32:50.526830026Z" level=info msg="CreateContainer within sandbox \"cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 10 01:32:50.573875 containerd[1568]: time="2026-03-10T01:32:50.573769329Z" level=info msg="Container 215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:32:50.650433 containerd[1568]: time="2026-03-10T01:32:50.650308172Z" level=info msg="CreateContainer within sandbox \"cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251\"" Mar 10 01:32:50.653682 containerd[1568]: time="2026-03-10T01:32:50.651801033Z" level=info msg="StartContainer for \"215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251\"" Mar 10 01:32:50.654038 containerd[1568]: time="2026-03-10T01:32:50.654013688Z" level=info msg="connecting to shim 215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251" address="unix:///run/containerd/s/ceda13777fbcc68ccaefe385caa712a6d7670f7d9117625b6c3ff13262f9d9b0" protocol=ttrpc version=3 Mar 10 01:32:50.719196 systemd[1]: Started cri-containerd-215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251.scope - libcontainer container 
215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251. Mar 10 01:32:50.889053 containerd[1568]: time="2026-03-10T01:32:50.888887182Z" level=info msg="StartContainer for \"215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251\" returns successfully" Mar 10 01:32:50.912061 kubelet[2837]: E0310 01:32:50.911846 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:51.031195 systemd[1]: cri-containerd-215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251.scope: Deactivated successfully. Mar 10 01:32:51.056903 containerd[1568]: time="2026-03-10T01:32:51.056801756Z" level=info msg="received container exit event container_id:\"215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251\" id:\"215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251\" pid:3556 exited_at:{seconds:1773106371 nanos:33402867}" Mar 10 01:32:51.305135 containerd[1568]: time="2026-03-10T01:32:51.304828649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 10 01:32:51.413547 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-215536a87bdb1b9861ceab500af4f2c198d3c8485944a1f716debfccb51d9251-rootfs.mount: Deactivated successfully. 
Mar 10 01:32:52.907435 kubelet[2837]: E0310 01:32:52.904388 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:54.912825 kubelet[2837]: E0310 01:32:54.909257 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:57.546981 kubelet[2837]: E0310 01:32:57.502178 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:32:59.847055 kubelet[2837]: E0310 01:32:59.846004 2837 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.843s" Mar 10 01:33:00.334661 kubelet[2837]: E0310 01:33:00.331429 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:33:01.903732 kubelet[2837]: E0310 01:33:01.903323 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:33:02.184395 containerd[1568]: time="2026-03-10T01:33:02.174403554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:02.188548 containerd[1568]: time="2026-03-10T01:33:02.188471322Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 10 01:33:02.192138 containerd[1568]: time="2026-03-10T01:33:02.192058892Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:02.199340 containerd[1568]: time="2026-03-10T01:33:02.199118656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:02.203074 containerd[1568]: time="2026-03-10T01:33:02.202841036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 10.897925094s" Mar 10 01:33:02.203074 containerd[1568]: time="2026-03-10T01:33:02.202931073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 10 01:33:02.222213 containerd[1568]: time="2026-03-10T01:33:02.221725511Z" level=info msg="CreateContainer within sandbox \"cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 10 
01:33:02.260167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1158810409.mount: Deactivated successfully. Mar 10 01:33:02.264558 containerd[1568]: time="2026-03-10T01:33:02.263711969Z" level=info msg="Container 9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:33:02.320754 containerd[1568]: time="2026-03-10T01:33:02.320537842Z" level=info msg="CreateContainer within sandbox \"cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e\"" Mar 10 01:33:02.324764 containerd[1568]: time="2026-03-10T01:33:02.324674121Z" level=info msg="StartContainer for \"9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e\"" Mar 10 01:33:02.331704 containerd[1568]: time="2026-03-10T01:33:02.331533662Z" level=info msg="connecting to shim 9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e" address="unix:///run/containerd/s/ceda13777fbcc68ccaefe385caa712a6d7670f7d9117625b6c3ff13262f9d9b0" protocol=ttrpc version=3 Mar 10 01:33:02.432549 systemd[1]: Started cri-containerd-9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e.scope - libcontainer container 9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e. 
Mar 10 01:33:02.699768 containerd[1568]: time="2026-03-10T01:33:02.699624573Z" level=info msg="StartContainer for \"9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e\" returns successfully" Mar 10 01:33:03.905142 kubelet[2837]: E0310 01:33:03.904412 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:33:04.290814 systemd[1]: cri-containerd-9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e.scope: Deactivated successfully. Mar 10 01:33:04.292317 systemd[1]: cri-containerd-9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e.scope: Consumed 1.242s CPU time, 183.5M memory peak, 4M read from disk, 177M written to disk. Mar 10 01:33:04.310235 containerd[1568]: time="2026-03-10T01:33:04.310021576Z" level=info msg="received container exit event container_id:\"9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e\" id:\"9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e\" pid:3617 exited_at:{seconds:1773106384 nanos:309457566}" Mar 10 01:33:04.393965 kubelet[2837]: I0310 01:33:04.390910 2837 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Mar 10 01:33:04.418446 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c87a9c6cef006ba87f62e630b2659d22a1973d7c4b6d9afd7e00a4d7b47fe2e-rootfs.mount: Deactivated successfully. Mar 10 01:33:04.591719 systemd[1]: Created slice kubepods-besteffort-pod09f04862_7ea7_4cf7_9b9c_71c321b7fda5.slice - libcontainer container kubepods-besteffort-pod09f04862_7ea7_4cf7_9b9c_71c321b7fda5.slice. 
Mar 10 01:33:04.599815 kubelet[2837]: I0310 01:33:04.598685 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qcnr\" (UniqueName: \"kubernetes.io/projected/537d9f6c-6f13-4a20-aa3b-d04712aaf478-kube-api-access-4qcnr\") pod \"calico-apiserver-7ff485cc5f-x8658\" (UID: \"537d9f6c-6f13-4a20-aa3b-d04712aaf478\") " pod="calico-system/calico-apiserver-7ff485cc5f-x8658" Mar 10 01:33:04.599815 kubelet[2837]: I0310 01:33:04.598729 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d5r7\" (UniqueName: \"kubernetes.io/projected/19ece181-70dc-4566-932d-df7e48989fd7-kube-api-access-7d5r7\") pod \"coredns-7d764666f9-bsdj4\" (UID: \"19ece181-70dc-4566-932d-df7e48989fd7\") " pod="kube-system/coredns-7d764666f9-bsdj4" Mar 10 01:33:04.599815 kubelet[2837]: I0310 01:33:04.598759 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09f04862-7ea7-4cf7-9b9c-71c321b7fda5-tigera-ca-bundle\") pod \"calico-kube-controllers-545475ff5b-79bsc\" (UID: \"09f04862-7ea7-4cf7-9b9c-71c321b7fda5\") " pod="calico-system/calico-kube-controllers-545475ff5b-79bsc" Mar 10 01:33:04.599815 kubelet[2837]: I0310 01:33:04.598784 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19ece181-70dc-4566-932d-df7e48989fd7-config-volume\") pod \"coredns-7d764666f9-bsdj4\" (UID: \"19ece181-70dc-4566-932d-df7e48989fd7\") " pod="kube-system/coredns-7d764666f9-bsdj4" Mar 10 01:33:04.599815 kubelet[2837]: I0310 01:33:04.598834 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5wmc\" (UniqueName: \"kubernetes.io/projected/09f04862-7ea7-4cf7-9b9c-71c321b7fda5-kube-api-access-t5wmc\") pod 
\"calico-kube-controllers-545475ff5b-79bsc\" (UID: \"09f04862-7ea7-4cf7-9b9c-71c321b7fda5\") " pod="calico-system/calico-kube-controllers-545475ff5b-79bsc" Mar 10 01:33:04.602790 kubelet[2837]: I0310 01:33:04.602708 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/537d9f6c-6f13-4a20-aa3b-d04712aaf478-calico-apiserver-certs\") pod \"calico-apiserver-7ff485cc5f-x8658\" (UID: \"537d9f6c-6f13-4a20-aa3b-d04712aaf478\") " pod="calico-system/calico-apiserver-7ff485cc5f-x8658" Mar 10 01:33:04.610467 systemd[1]: Created slice kubepods-besteffort-pod537d9f6c_6f13_4a20_aa3b_d04712aaf478.slice - libcontainer container kubepods-besteffort-pod537d9f6c_6f13_4a20_aa3b_d04712aaf478.slice. Mar 10 01:33:04.646513 systemd[1]: Created slice kubepods-burstable-pod19ece181_70dc_4566_932d_df7e48989fd7.slice - libcontainer container kubepods-burstable-pod19ece181_70dc_4566_932d_df7e48989fd7.slice. Mar 10 01:33:04.677763 systemd[1]: Created slice kubepods-besteffort-pod8bb68f61_585f_4b44_94f1_afbdee8dd54f.slice - libcontainer container kubepods-besteffort-pod8bb68f61_585f_4b44_94f1_afbdee8dd54f.slice. Mar 10 01:33:04.697263 systemd[1]: Created slice kubepods-besteffort-pod975ff6bc_9d37_4c8b_a404_eec5837ce86d.slice - libcontainer container kubepods-besteffort-pod975ff6bc_9d37_4c8b_a404_eec5837ce86d.slice. 
Mar 10 01:33:04.709409 kubelet[2837]: I0310 01:33:04.709212 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdkw\" (UniqueName: \"kubernetes.io/projected/c11513cf-a76c-4fa1-a5ad-bd942108eb0e-kube-api-access-czdkw\") pod \"coredns-7d764666f9-sb4dn\" (UID: \"c11513cf-a76c-4fa1-a5ad-bd942108eb0e\") " pod="kube-system/coredns-7d764666f9-sb4dn" Mar 10 01:33:04.709409 kubelet[2837]: I0310 01:33:04.709325 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/975ff6bc-9d37-4c8b-a404-eec5837ce86d-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-b8tsw\" (UID: \"975ff6bc-9d37-4c8b-a404-eec5837ce86d\") " pod="calico-system/goldmane-9f7667bb8-b8tsw" Mar 10 01:33:04.709409 kubelet[2837]: I0310 01:33:04.709368 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/975ff6bc-9d37-4c8b-a404-eec5837ce86d-goldmane-key-pair\") pod \"goldmane-9f7667bb8-b8tsw\" (UID: \"975ff6bc-9d37-4c8b-a404-eec5837ce86d\") " pod="calico-system/goldmane-9f7667bb8-b8tsw" Mar 10 01:33:04.709409 kubelet[2837]: I0310 01:33:04.709393 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-whisker-backend-key-pair\") pod \"whisker-58db4754d4-4gs2w\" (UID: \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\") " pod="calico-system/whisker-58db4754d4-4gs2w" Mar 10 01:33:04.714446 kubelet[2837]: I0310 01:33:04.709419 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8bb68f61-585f-4b44-94f1-afbdee8dd54f-calico-apiserver-certs\") pod \"calico-apiserver-7ff485cc5f-cdn75\" (UID: 
\"8bb68f61-585f-4b44-94f1-afbdee8dd54f\") " pod="calico-system/calico-apiserver-7ff485cc5f-cdn75" Mar 10 01:33:04.714446 kubelet[2837]: I0310 01:33:04.709432 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/975ff6bc-9d37-4c8b-a404-eec5837ce86d-config\") pod \"goldmane-9f7667bb8-b8tsw\" (UID: \"975ff6bc-9d37-4c8b-a404-eec5837ce86d\") " pod="calico-system/goldmane-9f7667bb8-b8tsw" Mar 10 01:33:04.714446 kubelet[2837]: I0310 01:33:04.709447 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6hnn\" (UniqueName: \"kubernetes.io/projected/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-kube-api-access-j6hnn\") pod \"whisker-58db4754d4-4gs2w\" (UID: \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\") " pod="calico-system/whisker-58db4754d4-4gs2w" Mar 10 01:33:04.714446 kubelet[2837]: I0310 01:33:04.709462 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f28t5\" (UniqueName: \"kubernetes.io/projected/8bb68f61-585f-4b44-94f1-afbdee8dd54f-kube-api-access-f28t5\") pod \"calico-apiserver-7ff485cc5f-cdn75\" (UID: \"8bb68f61-585f-4b44-94f1-afbdee8dd54f\") " pod="calico-system/calico-apiserver-7ff485cc5f-cdn75" Mar 10 01:33:04.714446 kubelet[2837]: I0310 01:33:04.709474 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-nginx-config\") pod \"whisker-58db4754d4-4gs2w\" (UID: \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\") " pod="calico-system/whisker-58db4754d4-4gs2w" Mar 10 01:33:04.719015 kubelet[2837]: I0310 01:33:04.709496 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c11513cf-a76c-4fa1-a5ad-bd942108eb0e-config-volume\") 
pod \"coredns-7d764666f9-sb4dn\" (UID: \"c11513cf-a76c-4fa1-a5ad-bd942108eb0e\") " pod="kube-system/coredns-7d764666f9-sb4dn" Mar 10 01:33:04.719015 kubelet[2837]: I0310 01:33:04.709510 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5tk\" (UniqueName: \"kubernetes.io/projected/975ff6bc-9d37-4c8b-a404-eec5837ce86d-kube-api-access-nm5tk\") pod \"goldmane-9f7667bb8-b8tsw\" (UID: \"975ff6bc-9d37-4c8b-a404-eec5837ce86d\") " pod="calico-system/goldmane-9f7667bb8-b8tsw" Mar 10 01:33:04.719015 kubelet[2837]: I0310 01:33:04.709525 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-whisker-ca-bundle\") pod \"whisker-58db4754d4-4gs2w\" (UID: \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\") " pod="calico-system/whisker-58db4754d4-4gs2w" Mar 10 01:33:04.719057 systemd[1]: Created slice kubepods-burstable-podc11513cf_a76c_4fa1_a5ad_bd942108eb0e.slice - libcontainer container kubepods-burstable-podc11513cf_a76c_4fa1_a5ad_bd942108eb0e.slice. Mar 10 01:33:04.746792 systemd[1]: Created slice kubepods-besteffort-pod81237aa7_ecdd_4b1d_813b_c25c2056b4e3.slice - libcontainer container kubepods-besteffort-pod81237aa7_ecdd_4b1d_813b_c25c2056b4e3.slice. 
Mar 10 01:33:04.913367 containerd[1568]: time="2026-03-10T01:33:04.912325076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545475ff5b-79bsc,Uid:09f04862-7ea7-4cf7-9b9c-71c321b7fda5,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:04.943660 containerd[1568]: time="2026-03-10T01:33:04.941794996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff485cc5f-x8658,Uid:537d9f6c-6f13-4a20-aa3b-d04712aaf478,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:04.979223 kubelet[2837]: E0310 01:33:04.978684 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:04.990036 containerd[1568]: time="2026-03-10T01:33:04.989754478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bsdj4,Uid:19ece181-70dc-4566-932d-df7e48989fd7,Namespace:kube-system,Attempt:0,}" Mar 10 01:33:05.015925 containerd[1568]: time="2026-03-10T01:33:05.015810592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff485cc5f-cdn75,Uid:8bb68f61-585f-4b44-94f1-afbdee8dd54f,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:05.034409 containerd[1568]: time="2026-03-10T01:33:05.033824784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-b8tsw,Uid:975ff6bc-9d37-4c8b-a404-eec5837ce86d,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:05.039687 kubelet[2837]: E0310 01:33:05.039404 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:05.041533 containerd[1568]: time="2026-03-10T01:33:05.041212612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-sb4dn,Uid:c11513cf-a76c-4fa1-a5ad-bd942108eb0e,Namespace:kube-system,Attempt:0,}" Mar 10 01:33:05.125701 containerd[1568]: 
time="2026-03-10T01:33:05.125473282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58db4754d4-4gs2w,Uid:81237aa7-ecdd-4b1d-813b-c25c2056b4e3,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:05.423793 containerd[1568]: time="2026-03-10T01:33:05.423624280Z" level=error msg="Failed to destroy network for sandbox \"cd6f4ce6b65408e752c8afd7c2e96f889e75b5733e09e86aa61e9ab77d66f3de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.439513 containerd[1568]: time="2026-03-10T01:33:05.439305438Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-b8tsw,Uid:975ff6bc-9d37-4c8b-a404-eec5837ce86d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd6f4ce6b65408e752c8afd7c2e96f889e75b5733e09e86aa61e9ab77d66f3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.448995 systemd[1]: run-netns-cni\x2d488c8b7b\x2d35a9\x2db152\x2da2c9\x2de6c4240eb721.mount: Deactivated successfully. 
Mar 10 01:33:05.461276 kubelet[2837]: E0310 01:33:05.460767 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd6f4ce6b65408e752c8afd7c2e96f889e75b5733e09e86aa61e9ab77d66f3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.461276 kubelet[2837]: E0310 01:33:05.461246 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd6f4ce6b65408e752c8afd7c2e96f889e75b5733e09e86aa61e9ab77d66f3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-b8tsw" Mar 10 01:33:05.462749 kubelet[2837]: E0310 01:33:05.461276 2837 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd6f4ce6b65408e752c8afd7c2e96f889e75b5733e09e86aa61e9ab77d66f3de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-b8tsw" Mar 10 01:33:05.462749 kubelet[2837]: E0310 01:33:05.461919 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-b8tsw_calico-system(975ff6bc-9d37-4c8b-a404-eec5837ce86d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-b8tsw_calico-system(975ff6bc-9d37-4c8b-a404-eec5837ce86d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd6f4ce6b65408e752c8afd7c2e96f889e75b5733e09e86aa61e9ab77d66f3de\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-b8tsw" podUID="975ff6bc-9d37-4c8b-a404-eec5837ce86d" Mar 10 01:33:05.507024 containerd[1568]: time="2026-03-10T01:33:05.506712992Z" level=error msg="Failed to destroy network for sandbox \"55bfaa1545d5c314ffb240395e1ec2b2e87d50e31d221124807d316cda5f7e6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.513034 systemd[1]: run-netns-cni\x2dd11d6b2c\x2d8650\x2d58ac\x2d2c29\x2dfc6ac4896c33.mount: Deactivated successfully. Mar 10 01:33:05.519348 containerd[1568]: time="2026-03-10T01:33:05.518668717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58db4754d4-4gs2w,Uid:81237aa7-ecdd-4b1d-813b-c25c2056b4e3,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bfaa1545d5c314ffb240395e1ec2b2e87d50e31d221124807d316cda5f7e6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.519708 kubelet[2837]: E0310 01:33:05.519546 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bfaa1545d5c314ffb240395e1ec2b2e87d50e31d221124807d316cda5f7e6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.519708 kubelet[2837]: E0310 01:33:05.519698 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"55bfaa1545d5c314ffb240395e1ec2b2e87d50e31d221124807d316cda5f7e6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58db4754d4-4gs2w" Mar 10 01:33:05.519825 kubelet[2837]: E0310 01:33:05.519724 2837 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55bfaa1545d5c314ffb240395e1ec2b2e87d50e31d221124807d316cda5f7e6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58db4754d4-4gs2w" Mar 10 01:33:05.519825 kubelet[2837]: E0310 01:33:05.519785 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58db4754d4-4gs2w_calico-system(81237aa7-ecdd-4b1d-813b-c25c2056b4e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58db4754d4-4gs2w_calico-system(81237aa7-ecdd-4b1d-813b-c25c2056b4e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55bfaa1545d5c314ffb240395e1ec2b2e87d50e31d221124807d316cda5f7e6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58db4754d4-4gs2w" podUID="81237aa7-ecdd-4b1d-813b-c25c2056b4e3" Mar 10 01:33:05.576832 containerd[1568]: time="2026-03-10T01:33:05.575526611Z" level=error msg="Failed to destroy network for sandbox \"d99e2120a0cf09fe4a46bc6e6c85bd7894d90016ba89b48b7d38c55f59653f98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 
01:33:05.596351 containerd[1568]: time="2026-03-10T01:33:05.595264219Z" level=error msg="Failed to destroy network for sandbox \"fddec100b14d8da83ba4564431a0cc673c76a2895a58bf21af60742c81b3b559\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.603964 containerd[1568]: time="2026-03-10T01:33:05.603842327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bsdj4,Uid:19ece181-70dc-4566-932d-df7e48989fd7,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99e2120a0cf09fe4a46bc6e6c85bd7894d90016ba89b48b7d38c55f59653f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.604940 kubelet[2837]: E0310 01:33:05.604846 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99e2120a0cf09fe4a46bc6e6c85bd7894d90016ba89b48b7d38c55f59653f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.605008 kubelet[2837]: E0310 01:33:05.604959 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99e2120a0cf09fe4a46bc6e6c85bd7894d90016ba89b48b7d38c55f59653f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-bsdj4" Mar 10 01:33:05.605008 kubelet[2837]: E0310 01:33:05.604988 2837 kuberuntime_manager.go:1558] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d99e2120a0cf09fe4a46bc6e6c85bd7894d90016ba89b48b7d38c55f59653f98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-bsdj4" Mar 10 01:33:05.605161 kubelet[2837]: E0310 01:33:05.605047 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-bsdj4_kube-system(19ece181-70dc-4566-932d-df7e48989fd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-bsdj4_kube-system(19ece181-70dc-4566-932d-df7e48989fd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d99e2120a0cf09fe4a46bc6e6c85bd7894d90016ba89b48b7d38c55f59653f98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-bsdj4" podUID="19ece181-70dc-4566-932d-df7e48989fd7" Mar 10 01:33:05.606949 containerd[1568]: time="2026-03-10T01:33:05.606920747Z" level=info msg="CreateContainer within sandbox \"cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 10 01:33:05.607527 containerd[1568]: time="2026-03-10T01:33:05.607480351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-sb4dn,Uid:c11513cf-a76c-4fa1-a5ad-bd942108eb0e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddec100b14d8da83ba4564431a0cc673c76a2895a58bf21af60742c81b3b559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Mar 10 01:33:05.607963 kubelet[2837]: E0310 01:33:05.607941 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddec100b14d8da83ba4564431a0cc673c76a2895a58bf21af60742c81b3b559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.608388 kubelet[2837]: E0310 01:33:05.608238 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddec100b14d8da83ba4564431a0cc673c76a2895a58bf21af60742c81b3b559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-sb4dn" Mar 10 01:33:05.609105 kubelet[2837]: E0310 01:33:05.608986 2837 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fddec100b14d8da83ba4564431a0cc673c76a2895a58bf21af60742c81b3b559\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-sb4dn" Mar 10 01:33:05.610372 kubelet[2837]: E0310 01:33:05.610098 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-sb4dn_kube-system(c11513cf-a76c-4fa1-a5ad-bd942108eb0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-sb4dn_kube-system(c11513cf-a76c-4fa1-a5ad-bd942108eb0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fddec100b14d8da83ba4564431a0cc673c76a2895a58bf21af60742c81b3b559\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-sb4dn" podUID="c11513cf-a76c-4fa1-a5ad-bd942108eb0e" Mar 10 01:33:05.626761 containerd[1568]: time="2026-03-10T01:33:05.626532929Z" level=error msg="Failed to destroy network for sandbox \"912d29fc529665dcdc68476769dc0e4d19dce637ccafe28c0163bf16d5797d4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.637141 containerd[1568]: time="2026-03-10T01:33:05.633514778Z" level=error msg="Failed to destroy network for sandbox \"fca4cf9958bd09739d777994a7ad0f0b08bcac781d5d8a9e7b18e7533c95d964\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.637373 containerd[1568]: time="2026-03-10T01:33:05.637298318Z" level=error msg="Failed to destroy network for sandbox \"7a24f4232eebbd7a584a624f0dc26cf06a390f3d083aa5522bab3ec38a4c661f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.637521 containerd[1568]: time="2026-03-10T01:33:05.635413972Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545475ff5b-79bsc,Uid:09f04862-7ea7-4cf7-9b9c-71c321b7fda5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"912d29fc529665dcdc68476769dc0e4d19dce637ccafe28c0163bf16d5797d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Mar 10 01:33:05.638317 kubelet[2837]: E0310 01:33:05.638071 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912d29fc529665dcdc68476769dc0e4d19dce637ccafe28c0163bf16d5797d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.638317 kubelet[2837]: E0310 01:33:05.638149 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912d29fc529665dcdc68476769dc0e4d19dce637ccafe28c0163bf16d5797d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-545475ff5b-79bsc" Mar 10 01:33:05.638317 kubelet[2837]: E0310 01:33:05.638178 2837 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"912d29fc529665dcdc68476769dc0e4d19dce637ccafe28c0163bf16d5797d4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-545475ff5b-79bsc" Mar 10 01:33:05.638476 kubelet[2837]: E0310 01:33:05.638245 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-545475ff5b-79bsc_calico-system(09f04862-7ea7-4cf7-9b9c-71c321b7fda5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-545475ff5b-79bsc_calico-system(09f04862-7ea7-4cf7-9b9c-71c321b7fda5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"912d29fc529665dcdc68476769dc0e4d19dce637ccafe28c0163bf16d5797d4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-545475ff5b-79bsc" podUID="09f04862-7ea7-4cf7-9b9c-71c321b7fda5" Mar 10 01:33:05.642705 containerd[1568]: time="2026-03-10T01:33:05.642329988Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff485cc5f-cdn75,Uid:8bb68f61-585f-4b44-94f1-afbdee8dd54f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a24f4232eebbd7a584a624f0dc26cf06a390f3d083aa5522bab3ec38a4c661f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.642990 kubelet[2837]: E0310 01:33:05.642920 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a24f4232eebbd7a584a624f0dc26cf06a390f3d083aa5522bab3ec38a4c661f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.642990 kubelet[2837]: E0310 01:33:05.642973 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a24f4232eebbd7a584a624f0dc26cf06a390f3d083aa5522bab3ec38a4c661f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7ff485cc5f-cdn75" Mar 10 01:33:05.643440 kubelet[2837]: E0310 01:33:05.642998 2837 kuberuntime_manager.go:1558] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a24f4232eebbd7a584a624f0dc26cf06a390f3d083aa5522bab3ec38a4c661f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7ff485cc5f-cdn75" Mar 10 01:33:05.643440 kubelet[2837]: E0310 01:33:05.643062 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ff485cc5f-cdn75_calico-system(8bb68f61-585f-4b44-94f1-afbdee8dd54f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ff485cc5f-cdn75_calico-system(8bb68f61-585f-4b44-94f1-afbdee8dd54f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a24f4232eebbd7a584a624f0dc26cf06a390f3d083aa5522bab3ec38a4c661f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7ff485cc5f-cdn75" podUID="8bb68f61-585f-4b44-94f1-afbdee8dd54f" Mar 10 01:33:05.644197 containerd[1568]: time="2026-03-10T01:33:05.644065312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff485cc5f-x8658,Uid:537d9f6c-6f13-4a20-aa3b-d04712aaf478,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca4cf9958bd09739d777994a7ad0f0b08bcac781d5d8a9e7b18e7533c95d964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.644348 kubelet[2837]: E0310 01:33:05.644264 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"fca4cf9958bd09739d777994a7ad0f0b08bcac781d5d8a9e7b18e7533c95d964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:05.644348 kubelet[2837]: E0310 01:33:05.644299 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca4cf9958bd09739d777994a7ad0f0b08bcac781d5d8a9e7b18e7533c95d964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7ff485cc5f-x8658" Mar 10 01:33:05.644348 kubelet[2837]: E0310 01:33:05.644316 2837 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fca4cf9958bd09739d777994a7ad0f0b08bcac781d5d8a9e7b18e7533c95d964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7ff485cc5f-x8658" Mar 10 01:33:05.644454 kubelet[2837]: E0310 01:33:05.644361 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7ff485cc5f-x8658_calico-system(537d9f6c-6f13-4a20-aa3b-d04712aaf478)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7ff485cc5f-x8658_calico-system(537d9f6c-6f13-4a20-aa3b-d04712aaf478)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fca4cf9958bd09739d777994a7ad0f0b08bcac781d5d8a9e7b18e7533c95d964\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-apiserver-7ff485cc5f-x8658" podUID="537d9f6c-6f13-4a20-aa3b-d04712aaf478" Mar 10 01:33:05.650765 containerd[1568]: time="2026-03-10T01:33:05.650675816Z" level=info msg="Container c7fd0e376f32a2be3f5200eda6f14f417279e12812830989dc1ebb50d50e5d1e: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:33:05.677709 containerd[1568]: time="2026-03-10T01:33:05.673709446Z" level=info msg="CreateContainer within sandbox \"cca989b8861e51ff93af916185ed9f172a1319938eec6bf486a32ab0c670c5bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c7fd0e376f32a2be3f5200eda6f14f417279e12812830989dc1ebb50d50e5d1e\"" Mar 10 01:33:05.677709 containerd[1568]: time="2026-03-10T01:33:05.676227210Z" level=info msg="StartContainer for \"c7fd0e376f32a2be3f5200eda6f14f417279e12812830989dc1ebb50d50e5d1e\"" Mar 10 01:33:05.686272 containerd[1568]: time="2026-03-10T01:33:05.683839043Z" level=info msg="connecting to shim c7fd0e376f32a2be3f5200eda6f14f417279e12812830989dc1ebb50d50e5d1e" address="unix:///run/containerd/s/ceda13777fbcc68ccaefe385caa712a6d7670f7d9117625b6c3ff13262f9d9b0" protocol=ttrpc version=3 Mar 10 01:33:05.736957 systemd[1]: Started cri-containerd-c7fd0e376f32a2be3f5200eda6f14f417279e12812830989dc1ebb50d50e5d1e.scope - libcontainer container c7fd0e376f32a2be3f5200eda6f14f417279e12812830989dc1ebb50d50e5d1e. Mar 10 01:33:05.921185 systemd[1]: Created slice kubepods-besteffort-pod6470ee27_1ae0_4c37_bfc5_73aa0f2ec825.slice - libcontainer container kubepods-besteffort-pod6470ee27_1ae0_4c37_bfc5_73aa0f2ec825.slice. 
Mar 10 01:33:05.941627 containerd[1568]: time="2026-03-10T01:33:05.940833449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xlrx5,Uid:6470ee27-1ae0-4c37-bfc5-73aa0f2ec825,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:05.952087 containerd[1568]: time="2026-03-10T01:33:05.951031265Z" level=info msg="StartContainer for \"c7fd0e376f32a2be3f5200eda6f14f417279e12812830989dc1ebb50d50e5d1e\" returns successfully" Mar 10 01:33:06.132290 containerd[1568]: time="2026-03-10T01:33:06.131971885Z" level=error msg="Failed to destroy network for sandbox \"1e53f481a5604d6c260293c52098af77896ca38064dfacaed5ac761c12444570\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:06.144369 containerd[1568]: time="2026-03-10T01:33:06.144249466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xlrx5,Uid:6470ee27-1ae0-4c37-bfc5-73aa0f2ec825,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e53f481a5604d6c260293c52098af77896ca38064dfacaed5ac761c12444570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:06.146926 kubelet[2837]: E0310 01:33:06.146697 2837 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e53f481a5604d6c260293c52098af77896ca38064dfacaed5ac761c12444570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 10 01:33:06.147338 kubelet[2837]: E0310 01:33:06.146949 2837 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"1e53f481a5604d6c260293c52098af77896ca38064dfacaed5ac761c12444570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xlrx5" Mar 10 01:33:06.147338 kubelet[2837]: E0310 01:33:06.146980 2837 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e53f481a5604d6c260293c52098af77896ca38064dfacaed5ac761c12444570\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xlrx5" Mar 10 01:33:06.147338 kubelet[2837]: E0310 01:33:06.147054 2837 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xlrx5_calico-system(6470ee27-1ae0-4c37-bfc5-73aa0f2ec825)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xlrx5_calico-system(6470ee27-1ae0-4c37-bfc5-73aa0f2ec825)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e53f481a5604d6c260293c52098af77896ca38064dfacaed5ac761c12444570\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xlrx5" podUID="6470ee27-1ae0-4c37-bfc5-73aa0f2ec825" Mar 10 01:33:06.423507 systemd[1]: run-netns-cni\x2dba31710b\x2de986\x2d7f38\x2d848a\x2d86e4ce2fe4d9.mount: Deactivated successfully. Mar 10 01:33:06.423954 systemd[1]: run-netns-cni\x2d0e92ec85\x2d6929\x2d4a56\x2da6ca\x2d4891db752129.mount: Deactivated successfully. 
Mar 10 01:33:06.424052 systemd[1]: run-netns-cni\x2d99160756\x2d7b6c\x2db40c\x2d6604\x2d14245ca027e4.mount: Deactivated successfully. Mar 10 01:33:06.424391 systemd[1]: run-netns-cni\x2d83f528a0\x2d1f45\x2dc51c\x2d92da\x2d4e9a45406a4c.mount: Deactivated successfully. Mar 10 01:33:06.424532 systemd[1]: run-netns-cni\x2d6368e4e2\x2da7da\x2df6ca\x2d767c\x2dd390ecd3b7dd.mount: Deactivated successfully. Mar 10 01:33:06.596498 kubelet[2837]: I0310 01:33:06.594526 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-w755j" podStartSLOduration=2.23496826 podStartE2EDuration="39.594511774s" podCreationTimestamp="2026-03-10 01:32:27 +0000 UTC" firstStartedPulling="2026-03-10 01:32:28.182733983 +0000 UTC m=+23.577848720" lastFinishedPulling="2026-03-10 01:33:05.542277496 +0000 UTC m=+60.937392234" observedRunningTime="2026-03-10 01:33:06.571444207 +0000 UTC m=+61.966558945" watchObservedRunningTime="2026-03-10 01:33:06.594511774 +0000 UTC m=+61.989626511" Mar 10 01:33:06.944392 kubelet[2837]: I0310 01:33:06.943054 2837 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-whisker-ca-bundle\") pod \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\" (UID: \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\") " Mar 10 01:33:06.944392 kubelet[2837]: I0310 01:33:06.943360 2837 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-whisker-backend-key-pair\") pod \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\" (UID: \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\") " Mar 10 01:33:06.944392 kubelet[2837]: I0310 01:33:06.943712 2837 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"kubernetes.io/configmap/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-nginx-config\" (UniqueName: \"kubernetes.io/configmap/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-nginx-config\") pod \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\" (UID: \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\") " Mar 10 01:33:06.944392 kubelet[2837]: I0310 01:33:06.944046 2837 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-kube-api-access-j6hnn\" (UniqueName: \"kubernetes.io/projected/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-kube-api-access-j6hnn\") pod \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\" (UID: \"81237aa7-ecdd-4b1d-813b-c25c2056b4e3\") " Mar 10 01:33:06.949073 kubelet[2837]: I0310 01:33:06.947500 2837 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-whisker-ca-bundle" pod "81237aa7-ecdd-4b1d-813b-c25c2056b4e3" (UID: "81237aa7-ecdd-4b1d-813b-c25c2056b4e3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:33:06.951017 kubelet[2837]: I0310 01:33:06.950157 2837 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-nginx-config" pod "81237aa7-ecdd-4b1d-813b-c25c2056b4e3" (UID: "81237aa7-ecdd-4b1d-813b-c25c2056b4e3"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 10 01:33:06.967711 kubelet[2837]: I0310 01:33:06.967452 2837 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-kube-api-access-j6hnn" pod "81237aa7-ecdd-4b1d-813b-c25c2056b4e3" (UID: "81237aa7-ecdd-4b1d-813b-c25c2056b4e3"). InnerVolumeSpecName "kube-api-access-j6hnn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 10 01:33:06.967496 systemd[1]: var-lib-kubelet-pods-81237aa7\x2decdd\x2d4b1d\x2d813b\x2dc25c2056b4e3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj6hnn.mount: Deactivated successfully. Mar 10 01:33:06.967923 systemd[1]: var-lib-kubelet-pods-81237aa7\x2decdd\x2d4b1d\x2d813b\x2dc25c2056b4e3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 10 01:33:06.969032 kubelet[2837]: I0310 01:33:06.969007 2837 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-whisker-backend-key-pair" pod "81237aa7-ecdd-4b1d-813b-c25c2056b4e3" (UID: "81237aa7-ecdd-4b1d-813b-c25c2056b4e3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 10 01:33:07.048154 kubelet[2837]: I0310 01:33:07.045265 2837 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Mar 10 01:33:07.048154 kubelet[2837]: I0310 01:33:07.045315 2837 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Mar 10 01:33:07.048154 kubelet[2837]: I0310 01:33:07.045328 2837 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-nginx-config\") on node \"localhost\" DevicePath \"\"" Mar 10 01:33:07.048154 kubelet[2837]: I0310 01:33:07.045338 2837 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j6hnn\" (UniqueName: \"kubernetes.io/projected/81237aa7-ecdd-4b1d-813b-c25c2056b4e3-kube-api-access-j6hnn\") on node \"localhost\" DevicePath \"\"" Mar 
10 01:33:07.537011 systemd[1]: Removed slice kubepods-besteffort-pod81237aa7_ecdd_4b1d_813b_c25c2056b4e3.slice - libcontainer container kubepods-besteffort-pod81237aa7_ecdd_4b1d_813b_c25c2056b4e3.slice. Mar 10 01:33:07.714748 systemd[1]: Created slice kubepods-besteffort-podef041864_d469_4993_89cf_82efd31fd686.slice - libcontainer container kubepods-besteffort-podef041864_d469_4993_89cf_82efd31fd686.slice. Mar 10 01:33:07.755443 kubelet[2837]: I0310 01:33:07.755399 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef041864-d469-4993-89cf-82efd31fd686-whisker-ca-bundle\") pod \"whisker-59b448cd7c-frmwc\" (UID: \"ef041864-d469-4993-89cf-82efd31fd686\") " pod="calico-system/whisker-59b448cd7c-frmwc" Mar 10 01:33:07.757253 kubelet[2837]: I0310 01:33:07.757103 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2c72\" (UniqueName: \"kubernetes.io/projected/ef041864-d469-4993-89cf-82efd31fd686-kube-api-access-n2c72\") pod \"whisker-59b448cd7c-frmwc\" (UID: \"ef041864-d469-4993-89cf-82efd31fd686\") " pod="calico-system/whisker-59b448cd7c-frmwc" Mar 10 01:33:07.757253 kubelet[2837]: I0310 01:33:07.757142 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef041864-d469-4993-89cf-82efd31fd686-whisker-backend-key-pair\") pod \"whisker-59b448cd7c-frmwc\" (UID: \"ef041864-d469-4993-89cf-82efd31fd686\") " pod="calico-system/whisker-59b448cd7c-frmwc" Mar 10 01:33:07.757253 kubelet[2837]: I0310 01:33:07.757170 2837 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ef041864-d469-4993-89cf-82efd31fd686-nginx-config\") pod \"whisker-59b448cd7c-frmwc\" (UID: \"ef041864-d469-4993-89cf-82efd31fd686\") 
" pod="calico-system/whisker-59b448cd7c-frmwc" Mar 10 01:33:08.032257 containerd[1568]: time="2026-03-10T01:33:08.031950329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59b448cd7c-frmwc,Uid:ef041864-d469-4993-89cf-82efd31fd686,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:08.555398 systemd-networkd[1474]: cali0816451379f: Link UP Mar 10 01:33:08.558308 systemd-networkd[1474]: cali0816451379f: Gained carrier Mar 10 01:33:08.627193 containerd[1568]: 2026-03-10 01:33:08.108 [ERROR][4037] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 10 01:33:08.627193 containerd[1568]: 2026-03-10 01:33:08.224 [INFO][4037] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--59b448cd7c--frmwc-eth0 whisker-59b448cd7c- calico-system ef041864-d469-4993-89cf-82efd31fd686 1013 0 2026-03-10 01:33:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:59b448cd7c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-59b448cd7c-frmwc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0816451379f [] [] }} ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Namespace="calico-system" Pod="whisker-59b448cd7c-frmwc" WorkloadEndpoint="localhost-k8s-whisker--59b448cd7c--frmwc-" Mar 10 01:33:08.627193 containerd[1568]: 2026-03-10 01:33:08.224 [INFO][4037] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Namespace="calico-system" Pod="whisker-59b448cd7c-frmwc" WorkloadEndpoint="localhost-k8s-whisker--59b448cd7c--frmwc-eth0" Mar 10 01:33:08.627193 containerd[1568]: 2026-03-10 01:33:08.322 [INFO][4051] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" HandleID="k8s-pod-network.718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Workload="localhost-k8s-whisker--59b448cd7c--frmwc-eth0" Mar 10 01:33:08.627686 containerd[1568]: 2026-03-10 01:33:08.347 [INFO][4051] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" HandleID="k8s-pod-network.718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Workload="localhost-k8s-whisker--59b448cd7c--frmwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000521f60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-59b448cd7c-frmwc", "timestamp":"2026-03-10 01:33:08.322110895 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000198580)} Mar 10 01:33:08.627686 containerd[1568]: 2026-03-10 01:33:08.347 [INFO][4051] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:33:08.627686 containerd[1568]: 2026-03-10 01:33:08.347 [INFO][4051] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:33:08.627686 containerd[1568]: 2026-03-10 01:33:08.347 [INFO][4051] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:33:08.627686 containerd[1568]: 2026-03-10 01:33:08.362 [INFO][4051] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" host="localhost" Mar 10 01:33:08.627686 containerd[1568]: 2026-03-10 01:33:08.394 [INFO][4051] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:33:08.627686 containerd[1568]: 2026-03-10 01:33:08.409 [INFO][4051] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:33:08.627686 containerd[1568]: 2026-03-10 01:33:08.429 [INFO][4051] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:08.627686 containerd[1568]: 2026-03-10 01:33:08.442 [INFO][4051] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:08.627686 containerd[1568]: 2026-03-10 01:33:08.442 [INFO][4051] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" host="localhost" Mar 10 01:33:08.628173 containerd[1568]: 2026-03-10 01:33:08.449 [INFO][4051] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12 Mar 10 01:33:08.628173 containerd[1568]: 2026-03-10 01:33:08.459 [INFO][4051] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" host="localhost" Mar 10 01:33:08.628173 containerd[1568]: 2026-03-10 01:33:08.491 [INFO][4051] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" host="localhost" Mar 10 01:33:08.628173 containerd[1568]: 2026-03-10 01:33:08.491 [INFO][4051] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" host="localhost" Mar 10 01:33:08.628173 containerd[1568]: 2026-03-10 01:33:08.491 [INFO][4051] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:33:08.628173 containerd[1568]: 2026-03-10 01:33:08.491 [INFO][4051] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" HandleID="k8s-pod-network.718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Workload="localhost-k8s-whisker--59b448cd7c--frmwc-eth0" Mar 10 01:33:08.628357 containerd[1568]: 2026-03-10 01:33:08.499 [INFO][4037] cni-plugin/k8s.go 418: Populated endpoint ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Namespace="calico-system" Pod="whisker-59b448cd7c-frmwc" WorkloadEndpoint="localhost-k8s-whisker--59b448cd7c--frmwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59b448cd7c--frmwc-eth0", GenerateName:"whisker-59b448cd7c-", Namespace:"calico-system", SelfLink:"", UID:"ef041864-d469-4993-89cf-82efd31fd686", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59b448cd7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-59b448cd7c-frmwc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0816451379f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:08.628357 containerd[1568]: 2026-03-10 01:33:08.499 [INFO][4037] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Namespace="calico-system" Pod="whisker-59b448cd7c-frmwc" WorkloadEndpoint="localhost-k8s-whisker--59b448cd7c--frmwc-eth0" Mar 10 01:33:08.628558 containerd[1568]: 2026-03-10 01:33:08.500 [INFO][4037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0816451379f ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Namespace="calico-system" Pod="whisker-59b448cd7c-frmwc" WorkloadEndpoint="localhost-k8s-whisker--59b448cd7c--frmwc-eth0" Mar 10 01:33:08.628558 containerd[1568]: 2026-03-10 01:33:08.562 [INFO][4037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Namespace="calico-system" Pod="whisker-59b448cd7c-frmwc" WorkloadEndpoint="localhost-k8s-whisker--59b448cd7c--frmwc-eth0" Mar 10 01:33:08.628756 containerd[1568]: 2026-03-10 01:33:08.571 [INFO][4037] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Namespace="calico-system" Pod="whisker-59b448cd7c-frmwc" 
WorkloadEndpoint="localhost-k8s-whisker--59b448cd7c--frmwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--59b448cd7c--frmwc-eth0", GenerateName:"whisker-59b448cd7c-", Namespace:"calico-system", SelfLink:"", UID:"ef041864-d469-4993-89cf-82efd31fd686", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 33, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"59b448cd7c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12", Pod:"whisker-59b448cd7c-frmwc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0816451379f", MAC:"f2:a5:f9:f0:6c:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:08.628931 containerd[1568]: 2026-03-10 01:33:08.608 [INFO][4037] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" Namespace="calico-system" Pod="whisker-59b448cd7c-frmwc" WorkloadEndpoint="localhost-k8s-whisker--59b448cd7c--frmwc-eth0" Mar 10 01:33:08.838286 containerd[1568]: time="2026-03-10T01:33:08.837966303Z" level=info msg="connecting to shim 
718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12" address="unix:///run/containerd/s/1a8ddaf50b827afa6fe56298131bdc9687630f0c44895ba9b8443d707fa99ff1" namespace=k8s.io protocol=ttrpc version=3 Mar 10 01:33:08.923042 systemd[1]: Started cri-containerd-718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12.scope - libcontainer container 718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12. Mar 10 01:33:08.924808 kubelet[2837]: I0310 01:33:08.923328 2837 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81237aa7-ecdd-4b1d-813b-c25c2056b4e3" path="/var/lib/kubelet/pods/81237aa7-ecdd-4b1d-813b-c25c2056b4e3/volumes" Mar 10 01:33:08.977745 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:33:09.126493 containerd[1568]: time="2026-03-10T01:33:09.126290943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-59b448cd7c-frmwc,Uid:ef041864-d469-4993-89cf-82efd31fd686,Namespace:calico-system,Attempt:0,} returns sandbox id \"718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12\"" Mar 10 01:33:09.138740 containerd[1568]: time="2026-03-10T01:33:09.138537717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 10 01:33:09.694904 systemd-networkd[1474]: cali0816451379f: Gained IPv6LL Mar 10 01:33:10.450055 containerd[1568]: time="2026-03-10T01:33:10.449725724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:10.453060 containerd[1568]: time="2026-03-10T01:33:10.452983341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 10 01:33:10.473353 containerd[1568]: time="2026-03-10T01:33:10.473304218Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:10.495008 containerd[1568]: time="2026-03-10T01:33:10.494934156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:10.495763 containerd[1568]: time="2026-03-10T01:33:10.495552423Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.356836795s" Mar 10 01:33:10.495763 containerd[1568]: time="2026-03-10T01:33:10.495651286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 10 01:33:10.516381 containerd[1568]: time="2026-03-10T01:33:10.514928296Z" level=info msg="CreateContainer within sandbox \"718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 10 01:33:10.546400 containerd[1568]: time="2026-03-10T01:33:10.546303569Z" level=info msg="Container 815741f353b858e97aac94c22fa9ce5af0ad06d9cd757e99480efbcc8e28c863: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:33:10.556384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102082966.mount: Deactivated successfully. 
Mar 10 01:33:10.606009 containerd[1568]: time="2026-03-10T01:33:10.605693379Z" level=info msg="CreateContainer within sandbox \"718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"815741f353b858e97aac94c22fa9ce5af0ad06d9cd757e99480efbcc8e28c863\"" Mar 10 01:33:10.610187 containerd[1568]: time="2026-03-10T01:33:10.609968743Z" level=info msg="StartContainer for \"815741f353b858e97aac94c22fa9ce5af0ad06d9cd757e99480efbcc8e28c863\"" Mar 10 01:33:10.612347 containerd[1568]: time="2026-03-10T01:33:10.612294748Z" level=info msg="connecting to shim 815741f353b858e97aac94c22fa9ce5af0ad06d9cd757e99480efbcc8e28c863" address="unix:///run/containerd/s/1a8ddaf50b827afa6fe56298131bdc9687630f0c44895ba9b8443d707fa99ff1" protocol=ttrpc version=3 Mar 10 01:33:10.656220 systemd-networkd[1474]: vxlan.calico: Link UP Mar 10 01:33:10.656230 systemd-networkd[1474]: vxlan.calico: Gained carrier Mar 10 01:33:10.669221 systemd[1]: Started cri-containerd-815741f353b858e97aac94c22fa9ce5af0ad06d9cd757e99480efbcc8e28c863.scope - libcontainer container 815741f353b858e97aac94c22fa9ce5af0ad06d9cd757e99480efbcc8e28c863. Mar 10 01:33:10.861268 containerd[1568]: time="2026-03-10T01:33:10.860713731Z" level=info msg="StartContainer for \"815741f353b858e97aac94c22fa9ce5af0ad06d9cd757e99480efbcc8e28c863\" returns successfully" Mar 10 01:33:10.864705 containerd[1568]: time="2026-03-10T01:33:10.864516543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 10 01:33:12.446395 systemd-networkd[1474]: vxlan.calico: Gained IPv6LL Mar 10 01:33:13.064396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590733578.mount: Deactivated successfully. 
Mar 10 01:33:13.143690 containerd[1568]: time="2026-03-10T01:33:13.143460961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:13.147050 containerd[1568]: time="2026-03-10T01:33:13.146256878Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 10 01:33:13.153046 containerd[1568]: time="2026-03-10T01:33:13.152548429Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:13.162557 containerd[1568]: time="2026-03-10T01:33:13.160091820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:13.162557 containerd[1568]: time="2026-03-10T01:33:13.161137105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.296584824s" Mar 10 01:33:13.162557 containerd[1568]: time="2026-03-10T01:33:13.161182720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 10 01:33:13.183997 containerd[1568]: time="2026-03-10T01:33:13.183173230Z" level=info msg="CreateContainer within sandbox \"718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 10 01:33:13.241734 
containerd[1568]: time="2026-03-10T01:33:13.237760197Z" level=info msg="Container b20bf56514f242f851daa005956b2b1e459099709f2c7dd8c736ec4002ee648b: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:33:13.268543 containerd[1568]: time="2026-03-10T01:33:13.268403004Z" level=info msg="CreateContainer within sandbox \"718299bcae19cdf269f072ab5e6c887cacce7c3fce3075c62d17c847fb107c12\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b20bf56514f242f851daa005956b2b1e459099709f2c7dd8c736ec4002ee648b\"" Mar 10 01:33:13.284791 containerd[1568]: time="2026-03-10T01:33:13.281812839Z" level=info msg="StartContainer for \"b20bf56514f242f851daa005956b2b1e459099709f2c7dd8c736ec4002ee648b\"" Mar 10 01:33:13.284791 containerd[1568]: time="2026-03-10T01:33:13.283981672Z" level=info msg="connecting to shim b20bf56514f242f851daa005956b2b1e459099709f2c7dd8c736ec4002ee648b" address="unix:///run/containerd/s/1a8ddaf50b827afa6fe56298131bdc9687630f0c44895ba9b8443d707fa99ff1" protocol=ttrpc version=3 Mar 10 01:33:13.359258 systemd[1]: Started cri-containerd-b20bf56514f242f851daa005956b2b1e459099709f2c7dd8c736ec4002ee648b.scope - libcontainer container b20bf56514f242f851daa005956b2b1e459099709f2c7dd8c736ec4002ee648b. 
Mar 10 01:33:13.595176 containerd[1568]: time="2026-03-10T01:33:13.595082861Z" level=info msg="StartContainer for \"b20bf56514f242f851daa005956b2b1e459099709f2c7dd8c736ec4002ee648b\" returns successfully" Mar 10 01:33:17.924681 containerd[1568]: time="2026-03-10T01:33:17.922421281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff485cc5f-x8658,Uid:537d9f6c-6f13-4a20-aa3b-d04712aaf478,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:17.929409 containerd[1568]: time="2026-03-10T01:33:17.929090913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-b8tsw,Uid:975ff6bc-9d37-4c8b-a404-eec5837ce86d,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:17.939454 containerd[1568]: time="2026-03-10T01:33:17.937080709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xlrx5,Uid:6470ee27-1ae0-4c37-bfc5-73aa0f2ec825,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:18.487319 systemd-networkd[1474]: calidc1c914f65d: Link UP Mar 10 01:33:18.497270 systemd-networkd[1474]: calidc1c914f65d: Gained carrier Mar 10 01:33:18.534381 kubelet[2837]: I0310 01:33:18.534274 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-59b448cd7c-frmwc" podStartSLOduration=7.5075816920000005 podStartE2EDuration="11.534253592s" podCreationTimestamp="2026-03-10 01:33:07 +0000 UTC" firstStartedPulling="2026-03-10 01:33:09.136388871 +0000 UTC m=+64.531503608" lastFinishedPulling="2026-03-10 01:33:13.163060771 +0000 UTC m=+68.558175508" observedRunningTime="2026-03-10 01:33:13.705265068 +0000 UTC m=+69.100379845" watchObservedRunningTime="2026-03-10 01:33:18.534253592 +0000 UTC m=+73.929368330" Mar 10 01:33:18.541897 containerd[1568]: 2026-03-10 01:33:18.099 [INFO][4450] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0 calico-apiserver-7ff485cc5f- calico-system 
537d9f6c-6f13-4a20-aa3b-d04712aaf478 957 0 2026-03-10 01:32:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ff485cc5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7ff485cc5f-x8658 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calidc1c914f65d [] [] }} ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-x8658" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-" Mar 10 01:33:18.541897 containerd[1568]: 2026-03-10 01:33:18.100 [INFO][4450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-x8658" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0" Mar 10 01:33:18.541897 containerd[1568]: 2026-03-10 01:33:18.315 [INFO][4489] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" HandleID="k8s-pod-network.dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Workload="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0" Mar 10 01:33:18.542184 containerd[1568]: 2026-03-10 01:33:18.337 [INFO][4489] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" HandleID="k8s-pod-network.dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Workload="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000690260), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-7ff485cc5f-x8658", 
"timestamp":"2026-03-10 01:33:18.315273879 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e2000)} Mar 10 01:33:18.542184 containerd[1568]: 2026-03-10 01:33:18.337 [INFO][4489] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:33:18.542184 containerd[1568]: 2026-03-10 01:33:18.338 [INFO][4489] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:33:18.542184 containerd[1568]: 2026-03-10 01:33:18.338 [INFO][4489] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:33:18.542184 containerd[1568]: 2026-03-10 01:33:18.358 [INFO][4489] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" host="localhost" Mar 10 01:33:18.542184 containerd[1568]: 2026-03-10 01:33:18.395 [INFO][4489] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:33:18.542184 containerd[1568]: 2026-03-10 01:33:18.411 [INFO][4489] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:33:18.542184 containerd[1568]: 2026-03-10 01:33:18.416 [INFO][4489] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:18.542184 containerd[1568]: 2026-03-10 01:33:18.423 [INFO][4489] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:18.542184 containerd[1568]: 2026-03-10 01:33:18.423 [INFO][4489] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" host="localhost" Mar 10 01:33:18.542736 containerd[1568]: 2026-03-10 01:33:18.428 [INFO][4489] ipam/ipam.go 
1806: Creating new handle: k8s-pod-network.dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e Mar 10 01:33:18.542736 containerd[1568]: 2026-03-10 01:33:18.439 [INFO][4489] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" host="localhost" Mar 10 01:33:18.542736 containerd[1568]: 2026-03-10 01:33:18.457 [INFO][4489] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" host="localhost" Mar 10 01:33:18.542736 containerd[1568]: 2026-03-10 01:33:18.458 [INFO][4489] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" host="localhost" Mar 10 01:33:18.542736 containerd[1568]: 2026-03-10 01:33:18.458 [INFO][4489] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:33:18.542736 containerd[1568]: 2026-03-10 01:33:18.458 [INFO][4489] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" HandleID="k8s-pod-network.dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Workload="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0" Mar 10 01:33:18.543739 containerd[1568]: 2026-03-10 01:33:18.468 [INFO][4450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-x8658" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0", GenerateName:"calico-apiserver-7ff485cc5f-", Namespace:"calico-system", SelfLink:"", UID:"537d9f6c-6f13-4a20-aa3b-d04712aaf478", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff485cc5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7ff485cc5f-x8658", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidc1c914f65d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:18.543935 containerd[1568]: 2026-03-10 01:33:18.471 [INFO][4450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-x8658" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0" Mar 10 01:33:18.543935 containerd[1568]: 2026-03-10 01:33:18.471 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc1c914f65d ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-x8658" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0" Mar 10 01:33:18.543935 containerd[1568]: 2026-03-10 01:33:18.500 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-x8658" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0" Mar 10 01:33:18.544037 containerd[1568]: 2026-03-10 01:33:18.503 [INFO][4450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-x8658" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0", GenerateName:"calico-apiserver-7ff485cc5f-", Namespace:"calico-system", 
SelfLink:"", UID:"537d9f6c-6f13-4a20-aa3b-d04712aaf478", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff485cc5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e", Pod:"calico-apiserver-7ff485cc5f-x8658", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calidc1c914f65d", MAC:"ee:16:00:61:68:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:18.544189 containerd[1568]: 2026-03-10 01:33:18.530 [INFO][4450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-x8658" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--x8658-eth0" Mar 10 01:33:18.645998 systemd-networkd[1474]: calif68f12a40d1: Link UP Mar 10 01:33:18.648922 containerd[1568]: time="2026-03-10T01:33:18.648825853Z" level=info msg="connecting to shim dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e" address="unix:///run/containerd/s/36ba8dcf107cf2c4a510e040e35eee7754b9531011f14f6ca76a3c1b7a83b3be" 
namespace=k8s.io protocol=ttrpc version=3 Mar 10 01:33:18.650712 systemd-networkd[1474]: calif68f12a40d1: Gained carrier Mar 10 01:33:18.728661 containerd[1568]: 2026-03-10 01:33:18.097 [INFO][4451] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xlrx5-eth0 csi-node-driver- calico-system 6470ee27-1ae0-4c37-bfc5-73aa0f2ec825 775 0 2026-03-10 01:32:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xlrx5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif68f12a40d1 [] [] }} ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Namespace="calico-system" Pod="csi-node-driver-xlrx5" WorkloadEndpoint="localhost-k8s-csi--node--driver--xlrx5-" Mar 10 01:33:18.728661 containerd[1568]: 2026-03-10 01:33:18.099 [INFO][4451] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Namespace="calico-system" Pod="csi-node-driver-xlrx5" WorkloadEndpoint="localhost-k8s-csi--node--driver--xlrx5-eth0" Mar 10 01:33:18.728661 containerd[1568]: 2026-03-10 01:33:18.342 [INFO][4487] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" HandleID="k8s-pod-network.86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Workload="localhost-k8s-csi--node--driver--xlrx5-eth0" Mar 10 01:33:18.729030 containerd[1568]: 2026-03-10 01:33:18.392 [INFO][4487] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" 
HandleID="k8s-pod-network.86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Workload="localhost-k8s-csi--node--driver--xlrx5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f290), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xlrx5", "timestamp":"2026-03-10 01:33:18.342471633 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000450580)} Mar 10 01:33:18.729030 containerd[1568]: 2026-03-10 01:33:18.392 [INFO][4487] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:33:18.729030 containerd[1568]: 2026-03-10 01:33:18.458 [INFO][4487] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:33:18.729030 containerd[1568]: 2026-03-10 01:33:18.459 [INFO][4487] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:33:18.729030 containerd[1568]: 2026-03-10 01:33:18.479 [INFO][4487] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" host="localhost" Mar 10 01:33:18.729030 containerd[1568]: 2026-03-10 01:33:18.501 [INFO][4487] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:33:18.729030 containerd[1568]: 2026-03-10 01:33:18.540 [INFO][4487] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:33:18.729030 containerd[1568]: 2026-03-10 01:33:18.551 [INFO][4487] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:18.729030 containerd[1568]: 2026-03-10 01:33:18.559 [INFO][4487] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:18.729030 
containerd[1568]: 2026-03-10 01:33:18.559 [INFO][4487] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" host="localhost" Mar 10 01:33:18.729399 containerd[1568]: 2026-03-10 01:33:18.569 [INFO][4487] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48 Mar 10 01:33:18.729399 containerd[1568]: 2026-03-10 01:33:18.593 [INFO][4487] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" host="localhost" Mar 10 01:33:18.729399 containerd[1568]: 2026-03-10 01:33:18.615 [INFO][4487] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" host="localhost" Mar 10 01:33:18.729399 containerd[1568]: 2026-03-10 01:33:18.619 [INFO][4487] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" host="localhost" Mar 10 01:33:18.729399 containerd[1568]: 2026-03-10 01:33:18.619 [INFO][4487] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:33:18.729399 containerd[1568]: 2026-03-10 01:33:18.619 [INFO][4487] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" HandleID="k8s-pod-network.86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Workload="localhost-k8s-csi--node--driver--xlrx5-eth0" Mar 10 01:33:18.729929 containerd[1568]: 2026-03-10 01:33:18.637 [INFO][4451] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Namespace="calico-system" Pod="csi-node-driver-xlrx5" WorkloadEndpoint="localhost-k8s-csi--node--driver--xlrx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xlrx5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6470ee27-1ae0-4c37-bfc5-73aa0f2ec825", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xlrx5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calif68f12a40d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:18.730072 containerd[1568]: 2026-03-10 01:33:18.638 [INFO][4451] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Namespace="calico-system" Pod="csi-node-driver-xlrx5" WorkloadEndpoint="localhost-k8s-csi--node--driver--xlrx5-eth0" Mar 10 01:33:18.730072 containerd[1568]: 2026-03-10 01:33:18.638 [INFO][4451] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif68f12a40d1 ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Namespace="calico-system" Pod="csi-node-driver-xlrx5" WorkloadEndpoint="localhost-k8s-csi--node--driver--xlrx5-eth0" Mar 10 01:33:18.730072 containerd[1568]: 2026-03-10 01:33:18.654 [INFO][4451] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Namespace="calico-system" Pod="csi-node-driver-xlrx5" WorkloadEndpoint="localhost-k8s-csi--node--driver--xlrx5-eth0" Mar 10 01:33:18.730393 containerd[1568]: 2026-03-10 01:33:18.656 [INFO][4451] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Namespace="calico-system" Pod="csi-node-driver-xlrx5" WorkloadEndpoint="localhost-k8s-csi--node--driver--xlrx5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xlrx5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6470ee27-1ae0-4c37-bfc5-73aa0f2ec825", ResourceVersion:"775", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 27, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48", Pod:"csi-node-driver-xlrx5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif68f12a40d1", MAC:"c2:37:42:77:1b:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:18.731536 containerd[1568]: 2026-03-10 01:33:18.721 [INFO][4451] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" Namespace="calico-system" Pod="csi-node-driver-xlrx5" WorkloadEndpoint="localhost-k8s-csi--node--driver--xlrx5-eth0" Mar 10 01:33:18.760383 systemd[1]: Started cri-containerd-dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e.scope - libcontainer container dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e. 
Mar 10 01:33:18.844329 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:33:18.880759 containerd[1568]: time="2026-03-10T01:33:18.878686928Z" level=info msg="connecting to shim 86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48" address="unix:///run/containerd/s/d781966961fb082d098978aaf712307bf0e065f425ade82aeb3279203e4177e3" namespace=k8s.io protocol=ttrpc version=3 Mar 10 01:33:18.884758 systemd-networkd[1474]: cali1e4ac481d03: Link UP Mar 10 01:33:18.893979 systemd-networkd[1474]: cali1e4ac481d03: Gained carrier Mar 10 01:33:18.923159 containerd[1568]: time="2026-03-10T01:33:18.923110871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545475ff5b-79bsc,Uid:09f04862-7ea7-4cf7-9b9c-71c321b7fda5,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:18.970668 containerd[1568]: 2026-03-10 01:33:18.126 [INFO][4444] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0 goldmane-9f7667bb8- calico-system 975ff6bc-9d37-4c8b-a404-eec5837ce86d 956 0 2026-03-10 01:32:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-b8tsw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1e4ac481d03 [] [] }} ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Namespace="calico-system" Pod="goldmane-9f7667bb8-b8tsw" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--b8tsw-" Mar 10 01:33:18.970668 containerd[1568]: 2026-03-10 01:33:18.132 [INFO][4444] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Namespace="calico-system" 
Pod="goldmane-9f7667bb8-b8tsw" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0" Mar 10 01:33:18.970668 containerd[1568]: 2026-03-10 01:33:18.339 [INFO][4499] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" HandleID="k8s-pod-network.09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Workload="localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0" Mar 10 01:33:18.971360 containerd[1568]: 2026-03-10 01:33:18.394 [INFO][4499] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" HandleID="k8s-pod-network.09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Workload="localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4cd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-9f7667bb8-b8tsw", "timestamp":"2026-03-10 01:33:18.339505197 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006b82c0)} Mar 10 01:33:18.971360 containerd[1568]: 2026-03-10 01:33:18.394 [INFO][4499] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:33:18.971360 containerd[1568]: 2026-03-10 01:33:18.620 [INFO][4499] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:33:18.971360 containerd[1568]: 2026-03-10 01:33:18.620 [INFO][4499] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:33:18.971360 containerd[1568]: 2026-03-10 01:33:18.632 [INFO][4499] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" host="localhost" Mar 10 01:33:18.971360 containerd[1568]: 2026-03-10 01:33:18.657 [INFO][4499] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:33:18.971360 containerd[1568]: 2026-03-10 01:33:18.689 [INFO][4499] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:33:18.971360 containerd[1568]: 2026-03-10 01:33:18.704 [INFO][4499] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:18.971360 containerd[1568]: 2026-03-10 01:33:18.731 [INFO][4499] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:18.971360 containerd[1568]: 2026-03-10 01:33:18.732 [INFO][4499] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" host="localhost" Mar 10 01:33:18.974772 containerd[1568]: 2026-03-10 01:33:18.785 [INFO][4499] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f Mar 10 01:33:18.974772 containerd[1568]: 2026-03-10 01:33:18.820 [INFO][4499] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" host="localhost" Mar 10 01:33:18.974772 containerd[1568]: 2026-03-10 01:33:18.853 [INFO][4499] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" host="localhost" Mar 10 01:33:18.974772 containerd[1568]: 2026-03-10 01:33:18.854 [INFO][4499] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" host="localhost" Mar 10 01:33:18.974772 containerd[1568]: 2026-03-10 01:33:18.855 [INFO][4499] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:33:18.974772 containerd[1568]: 2026-03-10 01:33:18.856 [INFO][4499] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" HandleID="k8s-pod-network.09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Workload="localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0" Mar 10 01:33:18.975019 containerd[1568]: 2026-03-10 01:33:18.868 [INFO][4444] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Namespace="calico-system" Pod="goldmane-9f7667bb8-b8tsw" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"975ff6bc-9d37-4c8b-a404-eec5837ce86d", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-b8tsw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e4ac481d03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:18.975019 containerd[1568]: 2026-03-10 01:33:18.869 [INFO][4444] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Namespace="calico-system" Pod="goldmane-9f7667bb8-b8tsw" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0" Mar 10 01:33:18.975200 containerd[1568]: 2026-03-10 01:33:18.869 [INFO][4444] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e4ac481d03 ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Namespace="calico-system" Pod="goldmane-9f7667bb8-b8tsw" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0" Mar 10 01:33:18.975200 containerd[1568]: 2026-03-10 01:33:18.899 [INFO][4444] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Namespace="calico-system" Pod="goldmane-9f7667bb8-b8tsw" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0" Mar 10 01:33:18.975267 containerd[1568]: 2026-03-10 01:33:18.902 [INFO][4444] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Namespace="calico-system" Pod="goldmane-9f7667bb8-b8tsw" 
WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"975ff6bc-9d37-4c8b-a404-eec5837ce86d", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f", Pod:"goldmane-9f7667bb8-b8tsw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1e4ac481d03", MAC:"f6:43:7c:d2:2f:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:18.975409 containerd[1568]: 2026-03-10 01:33:18.945 [INFO][4444] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" Namespace="calico-system" Pod="goldmane-9f7667bb8-b8tsw" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--b8tsw-eth0" Mar 10 01:33:19.020966 containerd[1568]: time="2026-03-10T01:33:19.018994425Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7ff485cc5f-x8658,Uid:537d9f6c-6f13-4a20-aa3b-d04712aaf478,Namespace:calico-system,Attempt:0,} returns sandbox id \"dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e\"" Mar 10 01:33:19.033546 containerd[1568]: time="2026-03-10T01:33:19.033480480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 10 01:33:19.055801 systemd[1]: Started cri-containerd-86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48.scope - libcontainer container 86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48. Mar 10 01:33:19.104920 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:33:19.171491 containerd[1568]: time="2026-03-10T01:33:19.171109172Z" level=info msg="connecting to shim 09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f" address="unix:///run/containerd/s/bdbbbd8463149bb06d5ff0bfe5cba9d233d6b20a8e9a643814e0a652eab41826" namespace=k8s.io protocol=ttrpc version=3 Mar 10 01:33:19.240469 containerd[1568]: time="2026-03-10T01:33:19.240089834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xlrx5,Uid:6470ee27-1ae0-4c37-bfc5-73aa0f2ec825,Namespace:calico-system,Attempt:0,} returns sandbox id \"86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48\"" Mar 10 01:33:19.282928 systemd[1]: Started cri-containerd-09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f.scope - libcontainer container 09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f. 
Mar 10 01:33:19.348071 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:33:19.502684 containerd[1568]: time="2026-03-10T01:33:19.502461152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-b8tsw,Uid:975ff6bc-9d37-4c8b-a404-eec5837ce86d,Namespace:calico-system,Attempt:0,} returns sandbox id \"09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f\"" Mar 10 01:33:19.621823 systemd-networkd[1474]: calia81f4f9fc1e: Link UP Mar 10 01:33:19.634997 systemd-networkd[1474]: calia81f4f9fc1e: Gained carrier Mar 10 01:33:19.683749 containerd[1568]: 2026-03-10 01:33:19.261 [INFO][4610] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0 calico-kube-controllers-545475ff5b- calico-system 09f04862-7ea7-4cf7-9b9c-71c321b7fda5 947 0 2026-03-10 01:32:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:545475ff5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-545475ff5b-79bsc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia81f4f9fc1e [] [] }} ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Namespace="calico-system" Pod="calico-kube-controllers-545475ff5b-79bsc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-" Mar 10 01:33:19.683749 containerd[1568]: 2026-03-10 01:33:19.267 [INFO][4610] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Namespace="calico-system" Pod="calico-kube-controllers-545475ff5b-79bsc" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0" Mar 10 01:33:19.683749 containerd[1568]: 2026-03-10 01:33:19.447 [INFO][4708] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" HandleID="k8s-pod-network.746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Workload="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0" Mar 10 01:33:19.684083 containerd[1568]: 2026-03-10 01:33:19.497 [INFO][4708] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" HandleID="k8s-pod-network.746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Workload="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-545475ff5b-79bsc", "timestamp":"2026-03-10 01:33:19.447912308 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f2c60)} Mar 10 01:33:19.684083 containerd[1568]: 2026-03-10 01:33:19.497 [INFO][4708] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:33:19.684083 containerd[1568]: 2026-03-10 01:33:19.497 [INFO][4708] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:33:19.684083 containerd[1568]: 2026-03-10 01:33:19.497 [INFO][4708] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:33:19.684083 containerd[1568]: 2026-03-10 01:33:19.513 [INFO][4708] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" host="localhost" Mar 10 01:33:19.684083 containerd[1568]: 2026-03-10 01:33:19.533 [INFO][4708] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:33:19.684083 containerd[1568]: 2026-03-10 01:33:19.551 [INFO][4708] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:33:19.684083 containerd[1568]: 2026-03-10 01:33:19.558 [INFO][4708] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:19.684083 containerd[1568]: 2026-03-10 01:33:19.564 [INFO][4708] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:19.686276 containerd[1568]: 2026-03-10 01:33:19.564 [INFO][4708] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" host="localhost" Mar 10 01:33:19.686276 containerd[1568]: 2026-03-10 01:33:19.577 [INFO][4708] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a Mar 10 01:33:19.686276 containerd[1568]: 2026-03-10 01:33:19.587 [INFO][4708] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" host="localhost" Mar 10 01:33:19.686276 containerd[1568]: 2026-03-10 01:33:19.602 [INFO][4708] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" host="localhost" Mar 10 01:33:19.686276 containerd[1568]: 2026-03-10 01:33:19.602 [INFO][4708] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" host="localhost" Mar 10 01:33:19.686276 containerd[1568]: 2026-03-10 01:33:19.602 [INFO][4708] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:33:19.686276 containerd[1568]: 2026-03-10 01:33:19.602 [INFO][4708] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" HandleID="k8s-pod-network.746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Workload="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0" Mar 10 01:33:19.686467 containerd[1568]: 2026-03-10 01:33:19.609 [INFO][4610] cni-plugin/k8s.go 418: Populated endpoint ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Namespace="calico-system" Pod="calico-kube-controllers-545475ff5b-79bsc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0", GenerateName:"calico-kube-controllers-545475ff5b-", Namespace:"calico-system", SelfLink:"", UID:"09f04862-7ea7-4cf7-9b9c-71c321b7fda5", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"545475ff5b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-545475ff5b-79bsc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia81f4f9fc1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:19.687493 containerd[1568]: 2026-03-10 01:33:19.609 [INFO][4610] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Namespace="calico-system" Pod="calico-kube-controllers-545475ff5b-79bsc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0" Mar 10 01:33:19.687493 containerd[1568]: 2026-03-10 01:33:19.610 [INFO][4610] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia81f4f9fc1e ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Namespace="calico-system" Pod="calico-kube-controllers-545475ff5b-79bsc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0" Mar 10 01:33:19.687493 containerd[1568]: 2026-03-10 01:33:19.633 [INFO][4610] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Namespace="calico-system" Pod="calico-kube-controllers-545475ff5b-79bsc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0" Mar 10 01:33:19.687729 containerd[1568]: 
2026-03-10 01:33:19.636 [INFO][4610] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Namespace="calico-system" Pod="calico-kube-controllers-545475ff5b-79bsc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0", GenerateName:"calico-kube-controllers-545475ff5b-", Namespace:"calico-system", SelfLink:"", UID:"09f04862-7ea7-4cf7-9b9c-71c321b7fda5", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"545475ff5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a", Pod:"calico-kube-controllers-545475ff5b-79bsc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia81f4f9fc1e", MAC:"32:0c:3d:6e:a6:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:19.688159 containerd[1568]: 
2026-03-10 01:33:19.673 [INFO][4610] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" Namespace="calico-system" Pod="calico-kube-controllers-545475ff5b-79bsc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--545475ff5b--79bsc-eth0" Mar 10 01:33:19.812554 containerd[1568]: time="2026-03-10T01:33:19.812388121Z" level=info msg="connecting to shim 746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a" address="unix:///run/containerd/s/341449a4ef45f84853bceec89374c42ae1c50528b6ac4de9301e69a40c28990c" namespace=k8s.io protocol=ttrpc version=3 Mar 10 01:33:19.872378 systemd[1]: Started cri-containerd-746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a.scope - libcontainer container 746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a. Mar 10 01:33:19.904523 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 10 01:33:19.923258 containerd[1568]: time="2026-03-10T01:33:19.922772436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff485cc5f-cdn75,Uid:8bb68f61-585f-4b44-94f1-afbdee8dd54f,Namespace:calico-system,Attempt:0,}" Mar 10 01:33:19.926523 kubelet[2837]: E0310 01:33:19.925548 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:19.928642 containerd[1568]: time="2026-03-10T01:33:19.927988302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bsdj4,Uid:19ece181-70dc-4566-932d-df7e48989fd7,Namespace:kube-system,Attempt:0,}" Mar 10 01:33:20.015469 containerd[1568]: time="2026-03-10T01:33:20.015327075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-545475ff5b-79bsc,Uid:09f04862-7ea7-4cf7-9b9c-71c321b7fda5,Namespace:calico-system,Attempt:0,} 
returns sandbox id \"746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a\"" Mar 10 01:33:20.191651 systemd-networkd[1474]: cali1e4ac481d03: Gained IPv6LL Mar 10 01:33:20.515507 systemd-networkd[1474]: calidc1c914f65d: Gained IPv6LL Mar 10 01:33:20.637768 systemd-networkd[1474]: cali6f37c81cc1b: Link UP Mar 10 01:33:20.638069 systemd-networkd[1474]: calif68f12a40d1: Gained IPv6LL Mar 10 01:33:20.639686 systemd-networkd[1474]: cali6f37c81cc1b: Gained carrier Mar 10 01:33:20.691406 containerd[1568]: 2026-03-10 01:33:20.113 [INFO][4786] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--bsdj4-eth0 coredns-7d764666f9- kube-system 19ece181-70dc-4566-932d-df7e48989fd7 954 0 2026-03-10 01:32:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-bsdj4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f37c81cc1b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Namespace="kube-system" Pod="coredns-7d764666f9-bsdj4" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--bsdj4-" Mar 10 01:33:20.691406 containerd[1568]: 2026-03-10 01:33:20.115 [INFO][4786] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Namespace="kube-system" Pod="coredns-7d764666f9-bsdj4" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--bsdj4-eth0" Mar 10 01:33:20.691406 containerd[1568]: 2026-03-10 01:33:20.225 [INFO][4816] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" 
HandleID="k8s-pod-network.ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Workload="localhost-k8s-coredns--7d764666f9--bsdj4-eth0" Mar 10 01:33:20.692146 containerd[1568]: 2026-03-10 01:33:20.247 [INFO][4816] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" HandleID="k8s-pod-network.ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Workload="localhost-k8s-coredns--7d764666f9--bsdj4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123630), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-bsdj4", "timestamp":"2026-03-10 01:33:20.225981699 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0007026e0)} Mar 10 01:33:20.692146 containerd[1568]: 2026-03-10 01:33:20.247 [INFO][4816] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:33:20.692146 containerd[1568]: 2026-03-10 01:33:20.247 [INFO][4816] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:33:20.692146 containerd[1568]: 2026-03-10 01:33:20.247 [INFO][4816] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:33:20.692146 containerd[1568]: 2026-03-10 01:33:20.258 [INFO][4816] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" host="localhost" Mar 10 01:33:20.692146 containerd[1568]: 2026-03-10 01:33:20.293 [INFO][4816] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:33:20.692146 containerd[1568]: 2026-03-10 01:33:20.313 [INFO][4816] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:33:20.692146 containerd[1568]: 2026-03-10 01:33:20.465 [INFO][4816] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:20.692146 containerd[1568]: 2026-03-10 01:33:20.557 [INFO][4816] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:20.692146 containerd[1568]: 2026-03-10 01:33:20.558 [INFO][4816] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" host="localhost" Mar 10 01:33:20.696042 containerd[1568]: 2026-03-10 01:33:20.563 [INFO][4816] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf Mar 10 01:33:20.696042 containerd[1568]: 2026-03-10 01:33:20.585 [INFO][4816] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" host="localhost" Mar 10 01:33:20.696042 containerd[1568]: 2026-03-10 01:33:20.619 [INFO][4816] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" host="localhost" Mar 10 01:33:20.696042 containerd[1568]: 2026-03-10 01:33:20.619 [INFO][4816] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" host="localhost" Mar 10 01:33:20.696042 containerd[1568]: 2026-03-10 01:33:20.619 [INFO][4816] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:33:20.696042 containerd[1568]: 2026-03-10 01:33:20.619 [INFO][4816] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" HandleID="k8s-pod-network.ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Workload="localhost-k8s-coredns--7d764666f9--bsdj4-eth0" Mar 10 01:33:20.696345 containerd[1568]: 2026-03-10 01:33:20.625 [INFO][4786] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Namespace="kube-system" Pod="coredns-7d764666f9-bsdj4" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--bsdj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--bsdj4-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"19ece181-70dc-4566-932d-df7e48989fd7", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-bsdj4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f37c81cc1b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:20.696345 containerd[1568]: 2026-03-10 01:33:20.627 [INFO][4786] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Namespace="kube-system" Pod="coredns-7d764666f9-bsdj4" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--bsdj4-eth0" Mar 10 01:33:20.696345 containerd[1568]: 2026-03-10 01:33:20.629 [INFO][4786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f37c81cc1b ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Namespace="kube-system" Pod="coredns-7d764666f9-bsdj4" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--bsdj4-eth0" Mar 10 
01:33:20.696345 containerd[1568]: 2026-03-10 01:33:20.643 [INFO][4786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Namespace="kube-system" Pod="coredns-7d764666f9-bsdj4" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--bsdj4-eth0" Mar 10 01:33:20.696345 containerd[1568]: 2026-03-10 01:33:20.644 [INFO][4786] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Namespace="kube-system" Pod="coredns-7d764666f9-bsdj4" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--bsdj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--bsdj4-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"19ece181-70dc-4566-932d-df7e48989fd7", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf", Pod:"coredns-7d764666f9-bsdj4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f37c81cc1b", MAC:"1a:c3:fc:ac:08:db", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:20.696345 containerd[1568]: 2026-03-10 01:33:20.681 [INFO][4786] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" Namespace="kube-system" Pod="coredns-7d764666f9-bsdj4" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--bsdj4-eth0" Mar 10 01:33:20.700964 systemd-networkd[1474]: calia81f4f9fc1e: Gained IPv6LL Mar 10 01:33:20.781224 systemd-networkd[1474]: calid113ad87394: Link UP Mar 10 01:33:20.782983 systemd-networkd[1474]: calid113ad87394: Gained carrier Mar 10 01:33:20.817551 containerd[1568]: time="2026-03-10T01:33:20.817271194Z" level=info msg="connecting to shim ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf" address="unix:///run/containerd/s/656a9cdc2ff16c8e13de315a02f589f77322f4082cf67b8b52fb4b66ea48c991" namespace=k8s.io protocol=ttrpc version=3 Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.113 [INFO][4783] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0 calico-apiserver-7ff485cc5f- 
calico-system 8bb68f61-585f-4b44-94f1-afbdee8dd54f 951 0 2026-03-10 01:32:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7ff485cc5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7ff485cc5f-cdn75 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calid113ad87394 [] [] }} ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-cdn75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.114 [INFO][4783] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-cdn75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.287 [INFO][4823] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" HandleID="k8s-pod-network.7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Workload="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.307 [INFO][4823] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" HandleID="k8s-pod-network.7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Workload="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001147e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-apiserver-7ff485cc5f-cdn75", "timestamp":"2026-03-10 01:33:20.287687288 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001a02c0)} Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.307 [INFO][4823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.621 [INFO][4823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.621 [INFO][4823] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.634 [INFO][4823] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" host="localhost" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.658 [INFO][4823] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.692 [INFO][4823] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.700 [INFO][4823] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.709 [INFO][4823] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.710 [INFO][4823] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" host="localhost" Mar 10 01:33:20.879011 containerd[1568]: 
2026-03-10 01:33:20.718 [INFO][4823] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803 Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.730 [INFO][4823] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" host="localhost" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.754 [INFO][4823] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" host="localhost" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.754 [INFO][4823] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" host="localhost" Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.754 [INFO][4823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 10 01:33:20.879011 containerd[1568]: 2026-03-10 01:33:20.754 [INFO][4823] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" HandleID="k8s-pod-network.7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Workload="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0" Mar 10 01:33:20.881202 containerd[1568]: 2026-03-10 01:33:20.763 [INFO][4783] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-cdn75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0", GenerateName:"calico-apiserver-7ff485cc5f-", Namespace:"calico-system", SelfLink:"", UID:"8bb68f61-585f-4b44-94f1-afbdee8dd54f", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff485cc5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7ff485cc5f-cdn75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid113ad87394", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:20.881202 containerd[1568]: 2026-03-10 01:33:20.763 [INFO][4783] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-cdn75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0" Mar 10 01:33:20.881202 containerd[1568]: 2026-03-10 01:33:20.763 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid113ad87394 ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-cdn75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0" Mar 10 01:33:20.881202 containerd[1568]: 2026-03-10 01:33:20.783 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-cdn75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0" Mar 10 01:33:20.881202 containerd[1568]: 2026-03-10 01:33:20.786 [INFO][4783] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-cdn75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0", GenerateName:"calico-apiserver-7ff485cc5f-", Namespace:"calico-system", 
SelfLink:"", UID:"8bb68f61-585f-4b44-94f1-afbdee8dd54f", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7ff485cc5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803", Pod:"calico-apiserver-7ff485cc5f-cdn75", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calid113ad87394", MAC:"2a:ec:2f:ed:83:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:20.881202 containerd[1568]: 2026-03-10 01:33:20.858 [INFO][4783] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" Namespace="calico-system" Pod="calico-apiserver-7ff485cc5f-cdn75" WorkloadEndpoint="localhost-k8s-calico--apiserver--7ff485cc5f--cdn75-eth0" Mar 10 01:33:20.879397 systemd[1]: Started cri-containerd-ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf.scope - libcontainer container ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf. 
Mar 10 01:33:21.183888 kubelet[2837]: E0310 01:33:21.182398 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:21.192491 containerd[1568]: time="2026-03-10T01:33:21.192189436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-sb4dn,Uid:c11513cf-a76c-4fa1-a5ad-bd942108eb0e,Namespace:kube-system,Attempt:0,}"
Mar 10 01:33:21.314733 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 10 01:33:21.328074 containerd[1568]: time="2026-03-10T01:33:21.328009936Z" level=info msg="connecting to shim 7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803" address="unix:///run/containerd/s/eb7ad213bbe59f5ecfdcc1840a247f1d1a571326e2aaacb9db1e702d0f4e0619" namespace=k8s.io protocol=ttrpc version=3
Mar 10 01:33:21.444383 containerd[1568]: time="2026-03-10T01:33:21.444044263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-bsdj4,Uid:19ece181-70dc-4566-932d-df7e48989fd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf\""
Mar 10 01:33:21.448302 kubelet[2837]: E0310 01:33:21.448213 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:21.459925 containerd[1568]: time="2026-03-10T01:33:21.459717002Z" level=info msg="CreateContainer within sandbox \"ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 10 01:33:21.474102 systemd[1]: Started cri-containerd-7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803.scope - libcontainer container 7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803.
Mar 10 01:33:21.538530 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 10 01:33:21.586406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2771520407.mount: Deactivated successfully.
Mar 10 01:33:21.625008 containerd[1568]: time="2026-03-10T01:33:21.624740394Z" level=info msg="Container 836c63482f9af4a4740026fa57727a5e34c042822c4eacf3f77fdeae07e6068e: CDI devices from CRI Config.CDIDevices: []"
Mar 10 01:33:21.659924 containerd[1568]: time="2026-03-10T01:33:21.658756448Z" level=info msg="CreateContainer within sandbox \"ef9967a8bb4f580c582d17ba538d3f746c66f8dd7ecbe68687fafaa27ff314cf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"836c63482f9af4a4740026fa57727a5e34c042822c4eacf3f77fdeae07e6068e\""
Mar 10 01:33:21.680500 containerd[1568]: time="2026-03-10T01:33:21.667122153Z" level=info msg="StartContainer for \"836c63482f9af4a4740026fa57727a5e34c042822c4eacf3f77fdeae07e6068e\""
Mar 10 01:33:21.688667 containerd[1568]: time="2026-03-10T01:33:21.688123070Z" level=info msg="connecting to shim 836c63482f9af4a4740026fa57727a5e34c042822c4eacf3f77fdeae07e6068e" address="unix:///run/containerd/s/656a9cdc2ff16c8e13de315a02f589f77322f4082cf67b8b52fb4b66ea48c991" protocol=ttrpc version=3
Mar 10 01:33:21.725518 containerd[1568]: time="2026-03-10T01:33:21.725060345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7ff485cc5f-cdn75,Uid:8bb68f61-585f-4b44-94f1-afbdee8dd54f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803\""
Mar 10 01:33:21.778396 systemd[1]: Started cri-containerd-836c63482f9af4a4740026fa57727a5e34c042822c4eacf3f77fdeae07e6068e.scope - libcontainer container 836c63482f9af4a4740026fa57727a5e34c042822c4eacf3f77fdeae07e6068e.
Mar 10 01:33:21.856762 systemd-networkd[1474]: cali6f37c81cc1b: Gained IPv6LL Mar 10 01:33:21.948988 kubelet[2837]: E0310 01:33:21.944099 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:21.983135 systemd-networkd[1474]: cali4e45af52ef5: Link UP Mar 10 01:33:21.983496 systemd-networkd[1474]: cali4e45af52ef5: Gained carrier Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.424 [INFO][4911] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--sb4dn-eth0 coredns-7d764666f9- kube-system c11513cf-a76c-4fa1-a5ad-bd942108eb0e 955 0 2026-03-10 01:32:09 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-sb4dn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4e45af52ef5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Namespace="kube-system" Pod="coredns-7d764666f9-sb4dn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--sb4dn-" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.424 [INFO][4911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Namespace="kube-system" Pod="coredns-7d764666f9-sb4dn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--sb4dn-eth0" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.587 [INFO][4973] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" 
HandleID="k8s-pod-network.3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Workload="localhost-k8s-coredns--7d764666f9--sb4dn-eth0" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.663 [INFO][4973] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" HandleID="k8s-pod-network.3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Workload="localhost-k8s-coredns--7d764666f9--sb4dn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019fb40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-sb4dn", "timestamp":"2026-03-10 01:33:21.587818913 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000330f20)} Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.663 [INFO][4973] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.664 [INFO][4973] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.664 [INFO][4973] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.692 [INFO][4973] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" host="localhost" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.730 [INFO][4973] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.759 [INFO][4973] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.780 [INFO][4973] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.791 [INFO][4973] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.792 [INFO][4973] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" host="localhost" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.800 [INFO][4973] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1 Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.817 [INFO][4973] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" host="localhost" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.845 [INFO][4973] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" host="localhost" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.847 [INFO][4973] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" host="localhost" Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.847 [INFO][4973] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 10 01:33:22.039447 containerd[1568]: 2026-03-10 01:33:21.847 [INFO][4973] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" HandleID="k8s-pod-network.3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Workload="localhost-k8s-coredns--7d764666f9--sb4dn-eth0" Mar 10 01:33:22.050474 containerd[1568]: 2026-03-10 01:33:21.861 [INFO][4911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Namespace="kube-system" Pod="coredns-7d764666f9-sb4dn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--sb4dn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--sb4dn-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"c11513cf-a76c-4fa1-a5ad-bd942108eb0e", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-sb4dn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e45af52ef5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:22.050474 containerd[1568]: 2026-03-10 01:33:21.948 [INFO][4911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Namespace="kube-system" Pod="coredns-7d764666f9-sb4dn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--sb4dn-eth0" Mar 10 01:33:22.050474 containerd[1568]: 2026-03-10 01:33:21.951 [INFO][4911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e45af52ef5 ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Namespace="kube-system" Pod="coredns-7d764666f9-sb4dn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--sb4dn-eth0" Mar 10 
01:33:22.050474 containerd[1568]: 2026-03-10 01:33:21.984 [INFO][4911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Namespace="kube-system" Pod="coredns-7d764666f9-sb4dn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--sb4dn-eth0" Mar 10 01:33:22.050474 containerd[1568]: 2026-03-10 01:33:21.989 [INFO][4911] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Namespace="kube-system" Pod="coredns-7d764666f9-sb4dn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--sb4dn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--sb4dn-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"c11513cf-a76c-4fa1-a5ad-bd942108eb0e", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.March, 10, 1, 32, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1", Pod:"coredns-7d764666f9-sb4dn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e45af52ef5", MAC:"42:88:ae:9a:d2:09", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 10 01:33:22.050474 containerd[1568]: 2026-03-10 01:33:22.026 [INFO][4911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" Namespace="kube-system" Pod="coredns-7d764666f9-sb4dn" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--sb4dn-eth0" Mar 10 01:33:22.075459 containerd[1568]: time="2026-03-10T01:33:22.073768215Z" level=info msg="StartContainer for \"836c63482f9af4a4740026fa57727a5e34c042822c4eacf3f77fdeae07e6068e\" returns successfully" Mar 10 01:33:22.311896 containerd[1568]: time="2026-03-10T01:33:22.311197993Z" level=info msg="connecting to shim 3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1" address="unix:///run/containerd/s/a7e9740e7a248468d43bc343943ea41a2fccf0ca2f12a73c88cf036645726c95" namespace=k8s.io protocol=ttrpc version=3 Mar 10 01:33:22.519329 systemd[1]: Started cri-containerd-3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1.scope - libcontainer container 3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1. 
Mar 10 01:33:22.585721 systemd-resolved[1390]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 10 01:33:22.683922 containerd[1568]: time="2026-03-10T01:33:22.683745869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-sb4dn,Uid:c11513cf-a76c-4fa1-a5ad-bd942108eb0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1\""
Mar 10 01:33:22.687146 kubelet[2837]: E0310 01:33:22.686423 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:22.697659 containerd[1568]: time="2026-03-10T01:33:22.697423673Z" level=info msg="CreateContainer within sandbox \"3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 10 01:33:22.744805 containerd[1568]: time="2026-03-10T01:33:22.743828014Z" level=info msg="Container 9ce9401ea20680de640f826dfe184f2ec5e7044bf42e0ef2b113cad8b1f35789: CDI devices from CRI Config.CDIDevices: []"
Mar 10 01:33:22.750689 systemd-networkd[1474]: calid113ad87394: Gained IPv6LL
Mar 10 01:33:22.753680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2062661417.mount: Deactivated successfully.
Mar 10 01:33:22.797747 containerd[1568]: time="2026-03-10T01:33:22.796183933Z" level=info msg="CreateContainer within sandbox \"3225fa26a63105f364a71bf61c63341cb60ce205c4dc4855cb26ed138f44d4d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9ce9401ea20680de640f826dfe184f2ec5e7044bf42e0ef2b113cad8b1f35789\""
Mar 10 01:33:22.813646 containerd[1568]: time="2026-03-10T01:33:22.813154546Z" level=info msg="StartContainer for \"9ce9401ea20680de640f826dfe184f2ec5e7044bf42e0ef2b113cad8b1f35789\""
Mar 10 01:33:22.836748 kubelet[2837]: E0310 01:33:22.836030 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 10 01:33:22.844104 containerd[1568]: time="2026-03-10T01:33:22.844027275Z" level=info msg="connecting to shim 9ce9401ea20680de640f826dfe184f2ec5e7044bf42e0ef2b113cad8b1f35789" address="unix:///run/containerd/s/a7e9740e7a248468d43bc343943ea41a2fccf0ca2f12a73c88cf036645726c95" protocol=ttrpc version=3
Mar 10 01:33:22.927928 kubelet[2837]: I0310 01:33:22.924983 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-bsdj4" podStartSLOduration=73.924964374 podStartE2EDuration="1m13.924964374s" podCreationTimestamp="2026-03-10 01:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:33:22.914776395 +0000 UTC m=+78.309891162" watchObservedRunningTime="2026-03-10 01:33:22.924964374 +0000 UTC m=+78.320079132"
Mar 10 01:33:22.975148 systemd[1]: Started cri-containerd-9ce9401ea20680de640f826dfe184f2ec5e7044bf42e0ef2b113cad8b1f35789.scope - libcontainer container 9ce9401ea20680de640f826dfe184f2ec5e7044bf42e0ef2b113cad8b1f35789.
Mar 10 01:33:23.069906 systemd-networkd[1474]: cali4e45af52ef5: Gained IPv6LL Mar 10 01:33:23.140770 containerd[1568]: time="2026-03-10T01:33:23.140247882Z" level=info msg="StartContainer for \"9ce9401ea20680de640f826dfe184f2ec5e7044bf42e0ef2b113cad8b1f35789\" returns successfully" Mar 10 01:33:23.902751 kubelet[2837]: E0310 01:33:23.902374 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:23.907721 kubelet[2837]: E0310 01:33:23.906534 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:23.958209 kubelet[2837]: I0310 01:33:23.957948 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-sb4dn" podStartSLOduration=74.957933897 podStartE2EDuration="1m14.957933897s" podCreationTimestamp="2026-03-10 01:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-10 01:33:23.957439807 +0000 UTC m=+79.352554543" watchObservedRunningTime="2026-03-10 01:33:23.957933897 +0000 UTC m=+79.353048635" Mar 10 01:33:24.923284 kubelet[2837]: E0310 01:33:24.920352 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:24.923284 kubelet[2837]: E0310 01:33:24.921257 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:25.932949 kubelet[2837]: E0310 01:33:25.929789 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:26.462290 containerd[1568]: time="2026-03-10T01:33:26.461147343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:26.473097 containerd[1568]: time="2026-03-10T01:33:26.467776096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 10 01:33:26.477408 containerd[1568]: time="2026-03-10T01:33:26.477230660Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:26.488182 containerd[1568]: time="2026-03-10T01:33:26.488082959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:26.490458 containerd[1568]: time="2026-03-10T01:33:26.490120612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 7.456594716s" Mar 10 01:33:26.490458 containerd[1568]: time="2026-03-10T01:33:26.490207032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 10 01:33:26.504637 containerd[1568]: time="2026-03-10T01:33:26.504513155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 10 01:33:26.527766 containerd[1568]: time="2026-03-10T01:33:26.526368337Z" level=info msg="CreateContainer within sandbox 
\"dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 01:33:26.579073 containerd[1568]: time="2026-03-10T01:33:26.576269733Z" level=info msg="Container faa7524fbd5f7bbf6d3a77dcc63e55977339a06cb97aafaa58533f5ab3134cce: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:33:26.642380 containerd[1568]: time="2026-03-10T01:33:26.640389084Z" level=info msg="CreateContainer within sandbox \"dfcfeacb7999823c232e4ee2943eceb89a79af9633145bd14a1a855dd7e4825e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"faa7524fbd5f7bbf6d3a77dcc63e55977339a06cb97aafaa58533f5ab3134cce\"" Mar 10 01:33:26.645920 containerd[1568]: time="2026-03-10T01:33:26.645176314Z" level=info msg="StartContainer for \"faa7524fbd5f7bbf6d3a77dcc63e55977339a06cb97aafaa58533f5ab3134cce\"" Mar 10 01:33:26.654184 containerd[1568]: time="2026-03-10T01:33:26.650371000Z" level=info msg="connecting to shim faa7524fbd5f7bbf6d3a77dcc63e55977339a06cb97aafaa58533f5ab3134cce" address="unix:///run/containerd/s/36ba8dcf107cf2c4a510e040e35eee7754b9531011f14f6ca76a3c1b7a83b3be" protocol=ttrpc version=3 Mar 10 01:33:26.777441 systemd[1]: Started cri-containerd-faa7524fbd5f7bbf6d3a77dcc63e55977339a06cb97aafaa58533f5ab3134cce.scope - libcontainer container faa7524fbd5f7bbf6d3a77dcc63e55977339a06cb97aafaa58533f5ab3134cce. 
Mar 10 01:33:27.073018 containerd[1568]: time="2026-03-10T01:33:27.072529784Z" level=info msg="StartContainer for \"faa7524fbd5f7bbf6d3a77dcc63e55977339a06cb97aafaa58533f5ab3134cce\" returns successfully" Mar 10 01:33:27.977158 containerd[1568]: time="2026-03-10T01:33:27.969826111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 10 01:33:27.985644 containerd[1568]: time="2026-03-10T01:33:27.979786479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:27.995219 containerd[1568]: time="2026-03-10T01:33:27.995083728Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:28.007202 containerd[1568]: time="2026-03-10T01:33:28.002828331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:28.007491 containerd[1568]: time="2026-03-10T01:33:28.006920588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.50221303s" Mar 10 01:33:28.007706 containerd[1568]: time="2026-03-10T01:33:28.007681844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 10 01:33:28.019787 containerd[1568]: time="2026-03-10T01:33:28.018678951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 10 
01:33:28.062737 containerd[1568]: time="2026-03-10T01:33:28.062559092Z" level=info msg="CreateContainer within sandbox \"86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 10 01:33:28.082778 kubelet[2837]: I0310 01:33:28.082657 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7ff485cc5f-x8658" podStartSLOduration=56.607473231 podStartE2EDuration="1m4.082645073s" podCreationTimestamp="2026-03-10 01:32:24 +0000 UTC" firstStartedPulling="2026-03-10 01:33:19.026056831 +0000 UTC m=+74.421171568" lastFinishedPulling="2026-03-10 01:33:26.501228673 +0000 UTC m=+81.896343410" observedRunningTime="2026-03-10 01:33:28.08177848 +0000 UTC m=+83.476893217" watchObservedRunningTime="2026-03-10 01:33:28.082645073 +0000 UTC m=+83.477759809" Mar 10 01:33:28.113279 containerd[1568]: time="2026-03-10T01:33:28.113028571Z" level=info msg="Container 1940e881542b8c7507926b4303f0df769d8caf628aaad8a5edbedae8e3de2b16: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:33:28.119023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1019571731.mount: Deactivated successfully. 
Mar 10 01:33:28.151654 containerd[1568]: time="2026-03-10T01:33:28.151467778Z" level=info msg="CreateContainer within sandbox \"86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1940e881542b8c7507926b4303f0df769d8caf628aaad8a5edbedae8e3de2b16\"" Mar 10 01:33:28.155533 containerd[1568]: time="2026-03-10T01:33:28.155360531Z" level=info msg="StartContainer for \"1940e881542b8c7507926b4303f0df769d8caf628aaad8a5edbedae8e3de2b16\"" Mar 10 01:33:28.165225 containerd[1568]: time="2026-03-10T01:33:28.165029156Z" level=info msg="connecting to shim 1940e881542b8c7507926b4303f0df769d8caf628aaad8a5edbedae8e3de2b16" address="unix:///run/containerd/s/d781966961fb082d098978aaf712307bf0e065f425ade82aeb3279203e4177e3" protocol=ttrpc version=3 Mar 10 01:33:28.230038 systemd[1]: Started cri-containerd-1940e881542b8c7507926b4303f0df769d8caf628aaad8a5edbedae8e3de2b16.scope - libcontainer container 1940e881542b8c7507926b4303f0df769d8caf628aaad8a5edbedae8e3de2b16. Mar 10 01:33:28.548813 update_engine[1556]: I20260310 01:33:28.548234 1556 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Mar 10 01:33:28.552078 update_engine[1556]: I20260310 01:33:28.551817 1556 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Mar 10 01:33:28.563904 update_engine[1556]: I20260310 01:33:28.563293 1556 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Mar 10 01:33:28.566964 update_engine[1556]: I20260310 01:33:28.566707 1556 omaha_request_params.cc:62] Current group set to stable Mar 10 01:33:28.567112 update_engine[1556]: I20260310 01:33:28.567085 1556 update_attempter.cc:499] Already updated boot flags. Skipping. Mar 10 01:33:28.567910 update_engine[1556]: I20260310 01:33:28.567366 1556 update_attempter.cc:643] Scheduling an action processor start. 
Mar 10 01:33:28.567910 update_engine[1556]: I20260310 01:33:28.567464 1556 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 10 01:33:28.567910 update_engine[1556]: I20260310 01:33:28.567512 1556 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Mar 10 01:33:28.572437 update_engine[1556]: I20260310 01:33:28.568810 1556 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 10 01:33:28.572555 update_engine[1556]: I20260310 01:33:28.572527 1556 omaha_request_action.cc:272] Request: Mar 10 01:33:28.572555 update_engine[1556]: Mar 10 01:33:28.572555 update_engine[1556]: Mar 10 01:33:28.572555 update_engine[1556]: Mar 10 01:33:28.572555 update_engine[1556]: Mar 10 01:33:28.572555 update_engine[1556]: Mar 10 01:33:28.572555 update_engine[1556]: Mar 10 01:33:28.572555 update_engine[1556]: Mar 10 01:33:28.572555 update_engine[1556]: Mar 10 01:33:28.573058 update_engine[1556]: I20260310 01:33:28.573033 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 10 01:33:28.611229 locksmithd[1602]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Mar 10 01:33:28.614816 update_engine[1556]: I20260310 01:33:28.614203 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 10 01:33:28.616235 update_engine[1556]: I20260310 01:33:28.616189 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 10 01:33:28.634134 containerd[1568]: time="2026-03-10T01:33:28.633424559Z" level=info msg="StartContainer for \"1940e881542b8c7507926b4303f0df769d8caf628aaad8a5edbedae8e3de2b16\" returns successfully" Mar 10 01:33:28.642218 update_engine[1556]: E20260310 01:33:28.641309 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 10 01:33:28.643201 update_engine[1556]: I20260310 01:33:28.642371 1556 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Mar 10 01:33:30.088787 kubelet[2837]: I0310 01:33:30.081938 2837 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Mar 10 01:33:32.152338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653845423.mount: Deactivated successfully. Mar 10 01:33:34.132891 containerd[1568]: time="2026-03-10T01:33:34.132419178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:34.135947 containerd[1568]: time="2026-03-10T01:33:34.135811241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 10 01:33:34.139147 containerd[1568]: time="2026-03-10T01:33:34.138427824Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:34.143341 containerd[1568]: time="2026-03-10T01:33:34.143207139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:34.144265 containerd[1568]: time="2026-03-10T01:33:34.144184030Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 6.12543102s" Mar 10 01:33:34.144265 containerd[1568]: time="2026-03-10T01:33:34.144218314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 10 01:33:34.147277 containerd[1568]: time="2026-03-10T01:33:34.147248159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 10 01:33:34.157376 containerd[1568]: time="2026-03-10T01:33:34.157045208Z" level=info msg="CreateContainer within sandbox \"09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 10 01:33:34.196298 containerd[1568]: time="2026-03-10T01:33:34.193553297Z" level=info msg="Container 0161f68a7d83168a94496ca10b603441c667d8dab27843342a9af2a4726beea7: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:33:34.318961 containerd[1568]: time="2026-03-10T01:33:34.318231694Z" level=info msg="CreateContainer within sandbox \"09ab9ea22bc4d90472e98fd22b0006dfea6bc99d0e0074dd86d3822d44b37d7f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0161f68a7d83168a94496ca10b603441c667d8dab27843342a9af2a4726beea7\"" Mar 10 01:33:34.324703 containerd[1568]: time="2026-03-10T01:33:34.324345336Z" level=info msg="StartContainer for \"0161f68a7d83168a94496ca10b603441c667d8dab27843342a9af2a4726beea7\"" Mar 10 01:33:34.329993 containerd[1568]: time="2026-03-10T01:33:34.329391309Z" level=info msg="connecting to shim 0161f68a7d83168a94496ca10b603441c667d8dab27843342a9af2a4726beea7" address="unix:///run/containerd/s/bdbbbd8463149bb06d5ff0bfe5cba9d233d6b20a8e9a643814e0a652eab41826" protocol=ttrpc version=3 Mar 10 01:33:34.424535 systemd[1]: Started 
cri-containerd-0161f68a7d83168a94496ca10b603441c667d8dab27843342a9af2a4726beea7.scope - libcontainer container 0161f68a7d83168a94496ca10b603441c667d8dab27843342a9af2a4726beea7. Mar 10 01:33:34.622321 containerd[1568]: time="2026-03-10T01:33:34.622147257Z" level=info msg="StartContainer for \"0161f68a7d83168a94496ca10b603441c667d8dab27843342a9af2a4726beea7\" returns successfully" Mar 10 01:33:37.906991 kubelet[2837]: E0310 01:33:37.905721 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:37.982892 kubelet[2837]: I0310 01:33:37.977833 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-b8tsw" podStartSLOduration=58.337207131 podStartE2EDuration="1m12.977816165s" podCreationTimestamp="2026-03-10 01:32:25 +0000 UTC" firstStartedPulling="2026-03-10 01:33:19.505510946 +0000 UTC m=+74.900625684" lastFinishedPulling="2026-03-10 01:33:34.146119981 +0000 UTC m=+89.541234718" observedRunningTime="2026-03-10 01:33:35.245786226 +0000 UTC m=+90.640900983" watchObservedRunningTime="2026-03-10 01:33:37.977816165 +0000 UTC m=+93.372930912" Mar 10 01:33:38.426375 update_engine[1556]: I20260310 01:33:38.422685 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 10 01:33:38.426375 update_engine[1556]: I20260310 01:33:38.426062 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 10 01:33:38.429812 update_engine[1556]: I20260310 01:33:38.427299 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 10 01:33:38.445504 update_engine[1556]: E20260310 01:33:38.445370 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 10 01:33:38.445726 update_engine[1556]: I20260310 01:33:38.445539 1556 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Mar 10 01:33:38.906449 kubelet[2837]: E0310 01:33:38.906168 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:39.905167 kubelet[2837]: E0310 01:33:39.903711 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:40.069262 containerd[1568]: time="2026-03-10T01:33:40.069169046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 10 01:33:40.081558 containerd[1568]: time="2026-03-10T01:33:40.081441270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:40.085341 containerd[1568]: time="2026-03-10T01:33:40.084071906Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:40.104684 containerd[1568]: time="2026-03-10T01:33:40.104504072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:40.114897 containerd[1568]: time="2026-03-10T01:33:40.114308114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id 
\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.96603079s" Mar 10 01:33:40.114897 containerd[1568]: time="2026-03-10T01:33:40.114394315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 10 01:33:40.116504 containerd[1568]: time="2026-03-10T01:33:40.116471539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 10 01:33:40.176447 containerd[1568]: time="2026-03-10T01:33:40.175490143Z" level=info msg="CreateContainer within sandbox \"746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 10 01:33:40.223549 containerd[1568]: time="2026-03-10T01:33:40.223398369Z" level=info msg="Container 39bb8abb28f51ef60bf4a4d6c705054a0efde3029a4a36126689a7c76ba7beb9: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:33:40.245480 containerd[1568]: time="2026-03-10T01:33:40.245397809Z" level=info msg="CreateContainer within sandbox \"746cbe496c0a90222f0db23795e0d690efba39aff3cd660f4cca5cd223031d1a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"39bb8abb28f51ef60bf4a4d6c705054a0efde3029a4a36126689a7c76ba7beb9\"" Mar 10 01:33:40.248069 containerd[1568]: time="2026-03-10T01:33:40.248035111Z" level=info msg="StartContainer for \"39bb8abb28f51ef60bf4a4d6c705054a0efde3029a4a36126689a7c76ba7beb9\"" Mar 10 01:33:40.253122 containerd[1568]: time="2026-03-10T01:33:40.253035111Z" level=info msg="connecting to shim 39bb8abb28f51ef60bf4a4d6c705054a0efde3029a4a36126689a7c76ba7beb9" 
address="unix:///run/containerd/s/341449a4ef45f84853bceec89374c42ae1c50528b6ac4de9301e69a40c28990c" protocol=ttrpc version=3 Mar 10 01:33:40.319547 containerd[1568]: time="2026-03-10T01:33:40.319443552Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:40.322348 containerd[1568]: time="2026-03-10T01:33:40.322255668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 10 01:33:40.340065 systemd[1]: Started cri-containerd-39bb8abb28f51ef60bf4a4d6c705054a0efde3029a4a36126689a7c76ba7beb9.scope - libcontainer container 39bb8abb28f51ef60bf4a4d6c705054a0efde3029a4a36126689a7c76ba7beb9. Mar 10 01:33:40.341757 containerd[1568]: time="2026-03-10T01:33:40.341626310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 225.055766ms" Mar 10 01:33:40.341757 containerd[1568]: time="2026-03-10T01:33:40.341666625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 10 01:33:40.344521 containerd[1568]: time="2026-03-10T01:33:40.344439909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 10 01:33:40.352484 containerd[1568]: time="2026-03-10T01:33:40.352407040Z" level=info msg="CreateContainer within sandbox \"7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 10 01:33:40.494816 containerd[1568]: time="2026-03-10T01:33:40.494734872Z" level=info msg="Container 
91e784da7e74b892b7932afe079bc8f718f4cd2f9f025434487a7d25bd2f14f9: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:33:40.522285 containerd[1568]: time="2026-03-10T01:33:40.522195094Z" level=info msg="CreateContainer within sandbox \"7a8b90fafe83b95f6dc0249378fab44c9535e92f69d210a7fb762ae43aa54803\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"91e784da7e74b892b7932afe079bc8f718f4cd2f9f025434487a7d25bd2f14f9\"" Mar 10 01:33:40.525031 containerd[1568]: time="2026-03-10T01:33:40.525003669Z" level=info msg="StartContainer for \"91e784da7e74b892b7932afe079bc8f718f4cd2f9f025434487a7d25bd2f14f9\"" Mar 10 01:33:40.528718 containerd[1568]: time="2026-03-10T01:33:40.527542507Z" level=info msg="connecting to shim 91e784da7e74b892b7932afe079bc8f718f4cd2f9f025434487a7d25bd2f14f9" address="unix:///run/containerd/s/eb7ad213bbe59f5ecfdcc1840a247f1d1a571326e2aaacb9db1e702d0f4e0619" protocol=ttrpc version=3 Mar 10 01:33:40.625265 systemd[1]: Started cri-containerd-91e784da7e74b892b7932afe079bc8f718f4cd2f9f025434487a7d25bd2f14f9.scope - libcontainer container 91e784da7e74b892b7932afe079bc8f718f4cd2f9f025434487a7d25bd2f14f9. 
Mar 10 01:33:40.630303 containerd[1568]: time="2026-03-10T01:33:40.630259931Z" level=info msg="StartContainer for \"39bb8abb28f51ef60bf4a4d6c705054a0efde3029a4a36126689a7c76ba7beb9\" returns successfully" Mar 10 01:33:41.215767 containerd[1568]: time="2026-03-10T01:33:41.215478620Z" level=info msg="StartContainer for \"91e784da7e74b892b7932afe079bc8f718f4cd2f9f025434487a7d25bd2f14f9\" returns successfully" Mar 10 01:33:41.402975 kubelet[2837]: I0310 01:33:41.401174 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-545475ff5b-79bsc" podStartSLOduration=54.303995506 podStartE2EDuration="1m14.400537829s" podCreationTimestamp="2026-03-10 01:32:27 +0000 UTC" firstStartedPulling="2026-03-10 01:33:20.019805502 +0000 UTC m=+75.414920239" lastFinishedPulling="2026-03-10 01:33:40.116347825 +0000 UTC m=+95.511462562" observedRunningTime="2026-03-10 01:33:41.332841101 +0000 UTC m=+96.727955858" watchObservedRunningTime="2026-03-10 01:33:41.400537829 +0000 UTC m=+96.795652566" Mar 10 01:33:41.669740 kubelet[2837]: I0310 01:33:41.646744 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7ff485cc5f-cdn75" podStartSLOduration=59.038334987 podStartE2EDuration="1m17.646454471s" podCreationTimestamp="2026-03-10 01:32:24 +0000 UTC" firstStartedPulling="2026-03-10 01:33:21.73455045 +0000 UTC m=+77.129665188" lastFinishedPulling="2026-03-10 01:33:40.342669935 +0000 UTC m=+95.737784672" observedRunningTime="2026-03-10 01:33:41.4077397 +0000 UTC m=+96.802854437" watchObservedRunningTime="2026-03-10 01:33:41.646454471 +0000 UTC m=+97.041569308" Mar 10 01:33:42.309024 containerd[1568]: time="2026-03-10T01:33:42.305818158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:42.310921 containerd[1568]: time="2026-03-10T01:33:42.308739978Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 10 01:33:42.320552 containerd[1568]: time="2026-03-10T01:33:42.320451808Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:42.329308 containerd[1568]: time="2026-03-10T01:33:42.329206391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 10 01:33:42.332007 containerd[1568]: time="2026-03-10T01:33:42.331555695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.987038632s" Mar 10 01:33:42.332007 containerd[1568]: time="2026-03-10T01:33:42.331691057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 10 01:33:42.364653 containerd[1568]: time="2026-03-10T01:33:42.361908522Z" level=info msg="CreateContainer within sandbox \"86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 10 01:33:42.396313 containerd[1568]: time="2026-03-10T01:33:42.396265340Z" level=info msg="Container 314a82a7700e3a830788375e1ae00dcf5da666da94edff62c88e2c6b8fb25551: CDI devices from CRI Config.CDIDevices: []" Mar 10 01:33:42.424128 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount602618431.mount: Deactivated successfully. Mar 10 01:33:42.493931 containerd[1568]: time="2026-03-10T01:33:42.493760973Z" level=info msg="CreateContainer within sandbox \"86a75335ec2e7c866eca130dee589fd1bfe50b85a4bb25d23c023f29bcc10d48\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"314a82a7700e3a830788375e1ae00dcf5da666da94edff62c88e2c6b8fb25551\"" Mar 10 01:33:42.496638 containerd[1568]: time="2026-03-10T01:33:42.496260945Z" level=info msg="StartContainer for \"314a82a7700e3a830788375e1ae00dcf5da666da94edff62c88e2c6b8fb25551\"" Mar 10 01:33:42.508186 containerd[1568]: time="2026-03-10T01:33:42.506316020Z" level=info msg="connecting to shim 314a82a7700e3a830788375e1ae00dcf5da666da94edff62c88e2c6b8fb25551" address="unix:///run/containerd/s/d781966961fb082d098978aaf712307bf0e065f425ade82aeb3279203e4177e3" protocol=ttrpc version=3 Mar 10 01:33:42.643992 systemd[1]: Started cri-containerd-314a82a7700e3a830788375e1ae00dcf5da666da94edff62c88e2c6b8fb25551.scope - libcontainer container 314a82a7700e3a830788375e1ae00dcf5da666da94edff62c88e2c6b8fb25551. 
Mar 10 01:33:43.019637 containerd[1568]: time="2026-03-10T01:33:43.019050495Z" level=info msg="StartContainer for \"314a82a7700e3a830788375e1ae00dcf5da666da94edff62c88e2c6b8fb25551\" returns successfully" Mar 10 01:33:43.747031 kubelet[2837]: I0310 01:33:43.746993 2837 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 10 01:33:43.747031 kubelet[2837]: I0310 01:33:43.747032 2837 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 10 01:33:45.336971 kubelet[2837]: I0310 01:33:45.335237 2837 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-xlrx5" podStartSLOduration=55.24637537 podStartE2EDuration="1m18.335219873s" podCreationTimestamp="2026-03-10 01:32:27 +0000 UTC" firstStartedPulling="2026-03-10 01:33:19.250399174 +0000 UTC m=+74.645513911" lastFinishedPulling="2026-03-10 01:33:42.339243677 +0000 UTC m=+97.734358414" observedRunningTime="2026-03-10 01:33:43.391919012 +0000 UTC m=+98.787033769" watchObservedRunningTime="2026-03-10 01:33:45.335219873 +0000 UTC m=+100.730334609" Mar 10 01:33:48.439034 update_engine[1556]: I20260310 01:33:48.438737 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 10 01:33:48.443181 update_engine[1556]: I20260310 01:33:48.441478 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 10 01:33:48.447675 update_engine[1556]: I20260310 01:33:48.447477 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 10 01:33:48.468050 update_engine[1556]: E20260310 01:33:48.467785 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 10 01:33:48.468222 update_engine[1556]: I20260310 01:33:48.468138 1556 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Mar 10 01:33:52.006697 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:37958.service - OpenSSH per-connection server daemon (10.0.0.1:37958). Mar 10 01:33:52.260217 sshd[5565]: Accepted publickey for core from 10.0.0.1 port 37958 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:33:52.264402 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:33:52.278685 systemd-logind[1543]: New session 10 of user core. Mar 10 01:33:52.287672 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 10 01:33:53.042019 sshd[5579]: Connection closed by 10.0.0.1 port 37958 Mar 10 01:33:53.042330 sshd-session[5565]: pam_unix(sshd:session): session closed for user core Mar 10 01:33:53.056210 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:37958.service: Deactivated successfully. Mar 10 01:33:53.062682 systemd[1]: session-10.scope: Deactivated successfully. Mar 10 01:33:53.068077 systemd-logind[1543]: Session 10 logged out. Waiting for processes to exit. Mar 10 01:33:53.071793 systemd-logind[1543]: Removed session 10. Mar 10 01:33:55.904084 kubelet[2837]: E0310 01:33:55.903996 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:33:58.079188 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:37964.service - OpenSSH per-connection server daemon (10.0.0.1:37964). 
Mar 10 01:33:58.293458 sshd[5599]: Accepted publickey for core from 10.0.0.1 port 37964 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:33:58.297989 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:33:58.319701 systemd-logind[1543]: New session 11 of user core. Mar 10 01:33:58.336843 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 10 01:33:58.420295 update_engine[1556]: I20260310 01:33:58.420133 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 10 01:33:58.421168 update_engine[1556]: I20260310 01:33:58.420651 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 10 01:33:58.422816 update_engine[1556]: I20260310 01:33:58.421481 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Mar 10 01:33:58.485926 update_engine[1556]: E20260310 01:33:58.475101 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 10 01:33:58.486124 update_engine[1556]: I20260310 01:33:58.486016 1556 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 10 01:33:58.486124 update_engine[1556]: I20260310 01:33:58.486038 1556 omaha_request_action.cc:617] Omaha request response: Mar 10 01:33:58.499325 update_engine[1556]: E20260310 01:33:58.499090 1556 omaha_request_action.cc:636] Omaha request network transfer failed. Mar 10 01:33:58.537405 update_engine[1556]: I20260310 01:33:58.537176 1556 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Mar 10 01:33:58.537405 update_engine[1556]: I20260310 01:33:58.537338 1556 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 10 01:33:58.537405 update_engine[1556]: I20260310 01:33:58.537354 1556 update_attempter.cc:306] Processing Done. 
Mar 10 01:33:58.537405 update_engine[1556]: E20260310 01:33:58.537397 1556 update_attempter.cc:619] Update failed. Mar 10 01:33:58.537405 update_engine[1556]: I20260310 01:33:58.537413 1556 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Mar 10 01:33:58.538111 update_engine[1556]: I20260310 01:33:58.537422 1556 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Mar 10 01:33:58.538111 update_engine[1556]: I20260310 01:33:58.537432 1556 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Mar 10 01:33:58.538111 update_engine[1556]: I20260310 01:33:58.537539 1556 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Mar 10 01:33:58.541157 locksmithd[1602]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Mar 10 01:33:58.541718 update_engine[1556]: I20260310 01:33:58.541522 1556 omaha_request_action.cc:271] Posting an Omaha request to disabled Mar 10 01:33:58.541718 update_engine[1556]: I20260310 01:33:58.541553 1556 omaha_request_action.cc:272] Request: Mar 10 01:33:58.541718 update_engine[1556]: Mar 10 01:33:58.541718 update_engine[1556]: Mar 10 01:33:58.541718 update_engine[1556]: Mar 10 01:33:58.541718 update_engine[1556]: Mar 10 01:33:58.541718 update_engine[1556]: Mar 10 01:33:58.541718 update_engine[1556]: Mar 10 01:33:58.543209 update_engine[1556]: I20260310 01:33:58.541714 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Mar 10 01:33:58.543209 update_engine[1556]: I20260310 01:33:58.541761 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Mar 10 01:33:58.543209 update_engine[1556]: I20260310 01:33:58.542371 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Mar 10 01:33:58.564379 update_engine[1556]: E20260310 01:33:58.564281 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Mar 10 01:33:58.564535 update_engine[1556]: I20260310 01:33:58.564419 1556 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Mar 10 01:33:58.564535 update_engine[1556]: I20260310 01:33:58.564431 1556 omaha_request_action.cc:617] Omaha request response: Mar 10 01:33:58.564535 update_engine[1556]: I20260310 01:33:58.564443 1556 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 10 01:33:58.564535 update_engine[1556]: I20260310 01:33:58.564452 1556 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Mar 10 01:33:58.564535 update_engine[1556]: I20260310 01:33:58.564462 1556 update_attempter.cc:306] Processing Done. Mar 10 01:33:58.564535 update_engine[1556]: I20260310 01:33:58.564471 1556 update_attempter.cc:310] Error event sent. Mar 10 01:33:58.564535 update_engine[1556]: I20260310 01:33:58.564486 1556 update_check_scheduler.cc:74] Next update check in 40m11s Mar 10 01:33:58.567228 locksmithd[1602]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Mar 10 01:33:58.922496 sshd[5602]: Connection closed by 10.0.0.1 port 37964 Mar 10 01:33:58.923099 sshd-session[5599]: pam_unix(sshd:session): session closed for user core Mar 10 01:33:58.936266 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:37964.service: Deactivated successfully. Mar 10 01:33:58.939522 systemd[1]: session-11.scope: Deactivated successfully. Mar 10 01:33:58.946727 systemd-logind[1543]: Session 11 logged out. Waiting for processes to exit. Mar 10 01:33:58.952084 systemd-logind[1543]: Removed session 11. Mar 10 01:34:03.943783 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:50908.service - OpenSSH per-connection server daemon (10.0.0.1:50908). 
Mar 10 01:34:04.215706 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 50908 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:04.218947 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:04.237149 systemd-logind[1543]: New session 12 of user core. Mar 10 01:34:04.248941 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 10 01:34:04.615180 sshd[5625]: Connection closed by 10.0.0.1 port 50908 Mar 10 01:34:04.616848 sshd-session[5622]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:04.634510 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:50908.service: Deactivated successfully. Mar 10 01:34:04.638433 systemd[1]: session-12.scope: Deactivated successfully. Mar 10 01:34:04.643112 systemd-logind[1543]: Session 12 logged out. Waiting for processes to exit. Mar 10 01:34:04.646288 systemd-logind[1543]: Removed session 12. Mar 10 01:34:09.648969 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:53512.service - OpenSSH per-connection server daemon (10.0.0.1:53512). Mar 10 01:34:09.868273 sshd[5738]: Accepted publickey for core from 10.0.0.1 port 53512 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:09.881112 sshd-session[5738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:09.902027 systemd-logind[1543]: New session 13 of user core. Mar 10 01:34:09.926017 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 10 01:34:10.212322 sshd[5741]: Connection closed by 10.0.0.1 port 53512 Mar 10 01:34:10.211713 sshd-session[5738]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:10.226176 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:53512.service: Deactivated successfully. Mar 10 01:34:10.231722 systemd[1]: session-13.scope: Deactivated successfully. Mar 10 01:34:10.240434 systemd-logind[1543]: Session 13 logged out. Waiting for processes to exit. 
Mar 10 01:34:10.245119 systemd-logind[1543]: Removed session 13. Mar 10 01:34:15.239779 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:53516.service - OpenSSH per-connection server daemon (10.0.0.1:53516). Mar 10 01:34:15.370170 sshd[5783]: Accepted publickey for core from 10.0.0.1 port 53516 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:15.372531 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:15.385332 systemd-logind[1543]: New session 14 of user core. Mar 10 01:34:15.401804 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 10 01:34:15.602050 sshd[5786]: Connection closed by 10.0.0.1 port 53516 Mar 10 01:34:15.604839 sshd-session[5783]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:15.614352 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:53516.service: Deactivated successfully. Mar 10 01:34:15.621195 systemd[1]: session-14.scope: Deactivated successfully. Mar 10 01:34:15.624790 systemd-logind[1543]: Session 14 logged out. Waiting for processes to exit. Mar 10 01:34:15.629694 systemd-logind[1543]: Removed session 14. Mar 10 01:34:20.626877 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:53312.service - OpenSSH per-connection server daemon (10.0.0.1:53312). Mar 10 01:34:20.701716 sshd[5802]: Accepted publickey for core from 10.0.0.1 port 53312 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:20.702833 sshd-session[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:20.715093 systemd-logind[1543]: New session 15 of user core. Mar 10 01:34:20.731072 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 10 01:34:20.902064 sshd[5805]: Connection closed by 10.0.0.1 port 53312 Mar 10 01:34:20.902443 sshd-session[5802]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:20.909866 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:53312.service: Deactivated successfully. Mar 10 01:34:20.913904 systemd[1]: session-15.scope: Deactivated successfully. Mar 10 01:34:20.920042 systemd-logind[1543]: Session 15 logged out. Waiting for processes to exit. Mar 10 01:34:20.924362 systemd-logind[1543]: Removed session 15. Mar 10 01:34:25.936830 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:53314.service - OpenSSH per-connection server daemon (10.0.0.1:53314). Mar 10 01:34:26.037388 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 53314 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:26.041303 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:26.068249 systemd-logind[1543]: New session 16 of user core. Mar 10 01:34:26.087216 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 10 01:34:26.373123 sshd[5822]: Connection closed by 10.0.0.1 port 53314 Mar 10 01:34:26.374280 sshd-session[5819]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:26.388191 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:53314.service: Deactivated successfully. Mar 10 01:34:26.395103 systemd[1]: session-16.scope: Deactivated successfully. Mar 10 01:34:26.398270 systemd-logind[1543]: Session 16 logged out. Waiting for processes to exit. Mar 10 01:34:26.404058 systemd-logind[1543]: Removed session 16. Mar 10 01:34:29.905002 kubelet[2837]: E0310 01:34:29.904419 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:31.411474 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:57704.service - OpenSSH per-connection server daemon (10.0.0.1:57704). 
Mar 10 01:34:31.711783 sshd[5837]: Accepted publickey for core from 10.0.0.1 port 57704 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:31.714846 sshd-session[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:31.731187 systemd-logind[1543]: New session 17 of user core. Mar 10 01:34:31.743856 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 10 01:34:32.513266 sshd[5854]: Connection closed by 10.0.0.1 port 57704 Mar 10 01:34:32.514112 sshd-session[5837]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:32.529906 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:57704.service: Deactivated successfully. Mar 10 01:34:32.533217 systemd[1]: session-17.scope: Deactivated successfully. Mar 10 01:34:32.536860 systemd-logind[1543]: Session 17 logged out. Waiting for processes to exit. Mar 10 01:34:32.544455 systemd-logind[1543]: Removed session 17. Mar 10 01:34:32.548099 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:57712.service - OpenSSH per-connection server daemon (10.0.0.1:57712). Mar 10 01:34:32.805189 sshd[5868]: Accepted publickey for core from 10.0.0.1 port 57712 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:32.821778 sshd-session[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:32.842694 systemd-logind[1543]: New session 18 of user core. Mar 10 01:34:32.856936 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 10 01:34:33.440717 sshd[5871]: Connection closed by 10.0.0.1 port 57712 Mar 10 01:34:33.444893 sshd-session[5868]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:33.564019 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:57712.service: Deactivated successfully. Mar 10 01:34:33.575173 systemd[1]: session-18.scope: Deactivated successfully. Mar 10 01:34:33.598740 systemd-logind[1543]: Session 18 logged out. Waiting for processes to exit. 
Mar 10 01:34:33.604888 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:57724.service - OpenSSH per-connection server daemon (10.0.0.1:57724). Mar 10 01:34:33.610337 systemd-logind[1543]: Removed session 18. Mar 10 01:34:33.821708 sshd[5883]: Accepted publickey for core from 10.0.0.1 port 57724 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:33.828951 sshd-session[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:33.853682 systemd-logind[1543]: New session 19 of user core. Mar 10 01:34:33.863278 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 10 01:34:34.171543 sshd[5886]: Connection closed by 10.0.0.1 port 57724 Mar 10 01:34:34.177335 sshd-session[5883]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:34.201786 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:57724.service: Deactivated successfully. Mar 10 01:34:34.211526 systemd[1]: session-19.scope: Deactivated successfully. Mar 10 01:34:34.218718 systemd-logind[1543]: Session 19 logged out. Waiting for processes to exit. Mar 10 01:34:34.221524 systemd-logind[1543]: Removed session 19. Mar 10 01:34:41.540428 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:47512.service - OpenSSH per-connection server daemon (10.0.0.1:47512). Mar 10 01:34:42.436784 sshd[5927]: Accepted publickey for core from 10.0.0.1 port 47512 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:42.439036 sshd-session[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:42.510677 systemd-logind[1543]: New session 20 of user core. Mar 10 01:34:42.516874 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 10 01:34:42.532661 kubelet[2837]: E0310 01:34:42.532226 2837 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.217s" Mar 10 01:34:42.537420 kubelet[2837]: E0310 01:34:42.537391 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:43.162048 sshd[5939]: Connection closed by 10.0.0.1 port 47512 Mar 10 01:34:43.163283 sshd-session[5927]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:43.212902 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:47512.service: Deactivated successfully. Mar 10 01:34:43.221902 systemd[1]: session-20.scope: Deactivated successfully. Mar 10 01:34:43.231660 systemd-logind[1543]: Session 20 logged out. Waiting for processes to exit. Mar 10 01:34:43.234681 systemd-logind[1543]: Removed session 20. Mar 10 01:34:43.908126 kubelet[2837]: E0310 01:34:43.904726 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:44.908638 kubelet[2837]: E0310 01:34:44.908298 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:48.202356 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:47518.service - OpenSSH per-connection server daemon (10.0.0.1:47518). Mar 10 01:34:48.438815 sshd[6036]: Accepted publickey for core from 10.0.0.1 port 47518 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:48.444144 sshd-session[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:48.465285 systemd-logind[1543]: New session 21 of user core. Mar 10 01:34:48.478155 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 10 01:34:48.805402 sshd[6054]: Connection closed by 10.0.0.1 port 47518 Mar 10 01:34:48.809835 sshd-session[6036]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:48.818257 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:47518.service: Deactivated successfully. Mar 10 01:34:48.822198 systemd[1]: session-21.scope: Deactivated successfully. Mar 10 01:34:48.825877 systemd-logind[1543]: Session 21 logged out. Waiting for processes to exit. Mar 10 01:34:48.831376 systemd-logind[1543]: Removed session 21. Mar 10 01:34:53.877535 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:60534.service - OpenSSH per-connection server daemon (10.0.0.1:60534). Mar 10 01:34:54.046319 sshd[6068]: Accepted publickey for core from 10.0.0.1 port 60534 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:54.051361 sshd-session[6068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:54.088151 systemd-logind[1543]: New session 22 of user core. Mar 10 01:34:54.097847 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 10 01:34:54.671519 sshd[6071]: Connection closed by 10.0.0.1 port 60534 Mar 10 01:34:54.696206 sshd-session[6068]: pam_unix(sshd:session): session closed for user core Mar 10 01:34:54.710225 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:60534.service: Deactivated successfully. Mar 10 01:34:54.714836 systemd[1]: session-22.scope: Deactivated successfully. Mar 10 01:34:54.719748 systemd-logind[1543]: Session 22 logged out. Waiting for processes to exit. Mar 10 01:34:54.723672 systemd-logind[1543]: Removed session 22. Mar 10 01:34:59.709504 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:41582.service - OpenSSH per-connection server daemon (10.0.0.1:41582). 
Mar 10 01:34:59.836856 sshd[6084]: Accepted publickey for core from 10.0.0.1 port 41582 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:34:59.840091 sshd-session[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:34:59.863930 systemd-logind[1543]: New session 23 of user core. Mar 10 01:34:59.881370 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 10 01:34:59.907290 kubelet[2837]: E0310 01:34:59.907200 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:34:59.910414 kubelet[2837]: E0310 01:34:59.909507 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:00.455884 sshd[6087]: Connection closed by 10.0.0.1 port 41582 Mar 10 01:35:00.456359 sshd-session[6084]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:00.476101 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:41582.service: Deactivated successfully. Mar 10 01:35:00.486418 systemd[1]: session-23.scope: Deactivated successfully. Mar 10 01:35:00.497859 systemd-logind[1543]: Session 23 logged out. Waiting for processes to exit. Mar 10 01:35:00.502292 systemd-logind[1543]: Removed session 23. Mar 10 01:35:05.494365 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:41584.service - OpenSSH per-connection server daemon (10.0.0.1:41584). Mar 10 01:35:05.617098 sshd[6149]: Accepted publickey for core from 10.0.0.1 port 41584 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:05.623800 sshd-session[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:05.639153 systemd-logind[1543]: New session 24 of user core. Mar 10 01:35:05.659280 systemd[1]: Started session-24.scope - Session 24 of User core. 
Mar 10 01:35:05.949826 sshd[6152]: Connection closed by 10.0.0.1 port 41584 Mar 10 01:35:05.951278 sshd-session[6149]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:05.964237 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:41584.service: Deactivated successfully. Mar 10 01:35:05.971807 systemd[1]: session-24.scope: Deactivated successfully. Mar 10 01:35:05.980996 systemd-logind[1543]: Session 24 logged out. Waiting for processes to exit. Mar 10 01:35:05.989989 systemd-logind[1543]: Removed session 24. Mar 10 01:35:07.908787 kubelet[2837]: E0310 01:35:07.908401 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 10 01:35:10.984957 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:49060.service - OpenSSH per-connection server daemon (10.0.0.1:49060). Mar 10 01:35:11.121222 sshd[6217]: Accepted publickey for core from 10.0.0.1 port 49060 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:11.124050 sshd-session[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:11.139286 systemd-logind[1543]: New session 25 of user core. Mar 10 01:35:11.147428 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 10 01:35:11.418725 sshd[6223]: Connection closed by 10.0.0.1 port 49060 Mar 10 01:35:11.419894 sshd-session[6217]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:11.434475 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:49060.service: Deactivated successfully. Mar 10 01:35:11.438214 systemd[1]: session-25.scope: Deactivated successfully. Mar 10 01:35:11.441015 systemd-logind[1543]: Session 25 logged out. Waiting for processes to exit. Mar 10 01:35:11.446402 systemd[1]: Started sshd@25-10.0.0.12:22-10.0.0.1:49072.service - OpenSSH per-connection server daemon (10.0.0.1:49072). Mar 10 01:35:11.449174 systemd-logind[1543]: Removed session 25. 
Mar 10 01:35:11.553024 sshd[6262]: Accepted publickey for core from 10.0.0.1 port 49072 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:11.556533 sshd-session[6262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:11.591055 systemd-logind[1543]: New session 26 of user core. Mar 10 01:35:11.609697 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 10 01:35:12.525344 sshd[6265]: Connection closed by 10.0.0.1 port 49072 Mar 10 01:35:12.527229 sshd-session[6262]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:12.556795 systemd[1]: sshd@25-10.0.0.12:22-10.0.0.1:49072.service: Deactivated successfully. Mar 10 01:35:12.562262 systemd[1]: session-26.scope: Deactivated successfully. Mar 10 01:35:12.564525 systemd-logind[1543]: Session 26 logged out. Waiting for processes to exit. Mar 10 01:35:12.575951 systemd[1]: Started sshd@26-10.0.0.12:22-10.0.0.1:49082.service - OpenSSH per-connection server daemon (10.0.0.1:49082). Mar 10 01:35:12.579375 systemd-logind[1543]: Removed session 26. Mar 10 01:35:12.791446 sshd[6278]: Accepted publickey for core from 10.0.0.1 port 49082 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:12.794747 sshd-session[6278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:12.815505 systemd-logind[1543]: New session 27 of user core. Mar 10 01:35:12.829061 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 10 01:35:15.752046 sshd[6283]: Connection closed by 10.0.0.1 port 49082 Mar 10 01:35:15.753490 sshd-session[6278]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:15.790418 systemd[1]: Started sshd@27-10.0.0.12:22-10.0.0.1:49084.service - OpenSSH per-connection server daemon (10.0.0.1:49084). Mar 10 01:35:15.802454 systemd[1]: sshd@26-10.0.0.12:22-10.0.0.1:49082.service: Deactivated successfully. 
Mar 10 01:35:15.807819 systemd[1]: session-27.scope: Deactivated successfully. Mar 10 01:35:15.819262 systemd-logind[1543]: Session 27 logged out. Waiting for processes to exit. Mar 10 01:35:15.826189 systemd-logind[1543]: Removed session 27. Mar 10 01:35:15.924546 sshd[6306]: Accepted publickey for core from 10.0.0.1 port 49084 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:15.929726 sshd-session[6306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:15.943175 systemd-logind[1543]: New session 28 of user core. Mar 10 01:35:15.960786 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 10 01:35:17.053858 sshd[6312]: Connection closed by 10.0.0.1 port 49084 Mar 10 01:35:17.057542 sshd-session[6306]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:17.095950 systemd[1]: Started sshd@28-10.0.0.12:22-10.0.0.1:49098.service - OpenSSH per-connection server daemon (10.0.0.1:49098). Mar 10 01:35:17.096901 systemd[1]: sshd@27-10.0.0.12:22-10.0.0.1:49084.service: Deactivated successfully. Mar 10 01:35:17.115474 systemd[1]: session-28.scope: Deactivated successfully. Mar 10 01:35:17.120961 systemd-logind[1543]: Session 28 logged out. Waiting for processes to exit. Mar 10 01:35:17.129286 systemd-logind[1543]: Removed session 28. Mar 10 01:35:17.295256 sshd[6323]: Accepted publickey for core from 10.0.0.1 port 49098 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:17.297450 sshd-session[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:17.309057 systemd-logind[1543]: New session 29 of user core. Mar 10 01:35:17.319006 systemd[1]: Started session-29.scope - Session 29 of User core. 
Mar 10 01:35:17.508298 sshd[6329]: Connection closed by 10.0.0.1 port 49098 Mar 10 01:35:17.508946 sshd-session[6323]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:17.520456 systemd[1]: sshd@28-10.0.0.12:22-10.0.0.1:49098.service: Deactivated successfully. Mar 10 01:35:17.526839 systemd[1]: session-29.scope: Deactivated successfully. Mar 10 01:35:17.533080 systemd-logind[1543]: Session 29 logged out. Waiting for processes to exit. Mar 10 01:35:17.538199 systemd-logind[1543]: Removed session 29. Mar 10 01:35:22.537707 systemd[1]: Started sshd@29-10.0.0.12:22-10.0.0.1:58826.service - OpenSSH per-connection server daemon (10.0.0.1:58826). Mar 10 01:35:22.685420 sshd[6343]: Accepted publickey for core from 10.0.0.1 port 58826 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:22.687743 sshd-session[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:22.709982 systemd-logind[1543]: New session 30 of user core. Mar 10 01:35:22.720881 systemd[1]: Started session-30.scope - Session 30 of User core. Mar 10 01:35:22.969881 sshd[6346]: Connection closed by 10.0.0.1 port 58826 Mar 10 01:35:22.968304 sshd-session[6343]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:22.990821 systemd[1]: sshd@29-10.0.0.12:22-10.0.0.1:58826.service: Deactivated successfully. Mar 10 01:35:22.999180 systemd[1]: session-30.scope: Deactivated successfully. Mar 10 01:35:23.004412 systemd-logind[1543]: Session 30 logged out. Waiting for processes to exit. Mar 10 01:35:23.013301 systemd-logind[1543]: Removed session 30. Mar 10 01:35:28.004503 systemd[1]: Started sshd@30-10.0.0.12:22-10.0.0.1:58828.service - OpenSSH per-connection server daemon (10.0.0.1:58828). 
Mar 10 01:35:28.121113 sshd[6361]: Accepted publickey for core from 10.0.0.1 port 58828 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:28.124065 sshd-session[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:28.144270 systemd-logind[1543]: New session 31 of user core. Mar 10 01:35:28.169822 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 10 01:35:28.389734 sshd[6364]: Connection closed by 10.0.0.1 port 58828 Mar 10 01:35:28.389989 sshd-session[6361]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:28.408508 systemd[1]: sshd@30-10.0.0.12:22-10.0.0.1:58828.service: Deactivated successfully. Mar 10 01:35:28.408522 systemd-logind[1543]: Session 31 logged out. Waiting for processes to exit. Mar 10 01:35:28.412220 systemd[1]: session-31.scope: Deactivated successfully. Mar 10 01:35:28.416113 systemd-logind[1543]: Removed session 31. Mar 10 01:35:33.426934 systemd[1]: Started sshd@31-10.0.0.12:22-10.0.0.1:60454.service - OpenSSH per-connection server daemon (10.0.0.1:60454). Mar 10 01:35:33.538966 sshd[6379]: Accepted publickey for core from 10.0.0.1 port 60454 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:33.543283 sshd-session[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:33.558208 systemd-logind[1543]: New session 32 of user core. Mar 10 01:35:33.568079 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 10 01:35:33.801327 sshd[6382]: Connection closed by 10.0.0.1 port 60454 Mar 10 01:35:33.801465 sshd-session[6379]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:33.808862 systemd-logind[1543]: Session 32 logged out. Waiting for processes to exit. Mar 10 01:35:33.809728 systemd[1]: sshd@31-10.0.0.12:22-10.0.0.1:60454.service: Deactivated successfully. Mar 10 01:35:33.814750 systemd[1]: session-32.scope: Deactivated successfully. 
Mar 10 01:35:33.820937 systemd-logind[1543]: Removed session 32. Mar 10 01:35:38.829064 systemd[1]: Started sshd@32-10.0.0.12:22-10.0.0.1:46116.service - OpenSSH per-connection server daemon (10.0.0.1:46116). Mar 10 01:35:38.947224 sshd[6443]: Accepted publickey for core from 10.0.0.1 port 46116 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:38.955771 sshd-session[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:38.983424 systemd-logind[1543]: New session 33 of user core. Mar 10 01:35:39.001464 systemd[1]: Started session-33.scope - Session 33 of User core. Mar 10 01:35:39.371311 sshd[6446]: Connection closed by 10.0.0.1 port 46116 Mar 10 01:35:39.371913 sshd-session[6443]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:39.386824 systemd[1]: sshd@32-10.0.0.12:22-10.0.0.1:46116.service: Deactivated successfully. Mar 10 01:35:39.392794 systemd[1]: session-33.scope: Deactivated successfully. Mar 10 01:35:39.396522 systemd-logind[1543]: Session 33 logged out. Waiting for processes to exit. Mar 10 01:35:39.401519 systemd-logind[1543]: Removed session 33. Mar 10 01:35:44.423836 systemd[1]: Started sshd@33-10.0.0.12:22-10.0.0.1:46122.service - OpenSSH per-connection server daemon (10.0.0.1:46122). Mar 10 01:35:44.608351 sshd[6485]: Accepted publickey for core from 10.0.0.1 port 46122 ssh2: RSA SHA256:7ZzKSK/M+RmhnyiMo84y3Zwp+Rnqzep2WFGqVIx00zY Mar 10 01:35:44.608055 sshd-session[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 10 01:35:44.626702 systemd-logind[1543]: New session 34 of user core. Mar 10 01:35:44.646466 systemd[1]: Started session-34.scope - Session 34 of User core. 
Mar 10 01:35:45.082043 sshd[6489]: Connection closed by 10.0.0.1 port 46122 Mar 10 01:35:45.081850 sshd-session[6485]: pam_unix(sshd:session): session closed for user core Mar 10 01:35:45.095677 systemd[1]: sshd@33-10.0.0.12:22-10.0.0.1:46122.service: Deactivated successfully. Mar 10 01:35:45.102077 systemd[1]: session-34.scope: Deactivated successfully. Mar 10 01:35:45.110886 systemd-logind[1543]: Session 34 logged out. Waiting for processes to exit. Mar 10 01:35:45.113928 systemd-logind[1543]: Removed session 34. Mar 10 01:35:45.907695 kubelet[2837]: E0310 01:35:45.907485 2837 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"