Apr 17 00:06:53.924414 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 16 22:00:21 -00 2026
Apr 17 00:06:53.924454 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 00:06:53.924468 kernel: BIOS-provided physical RAM map:
Apr 17 00:06:53.924478 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 17 00:06:53.924488 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 17 00:06:53.924498 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 17 00:06:53.924513 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 17 00:06:53.924523 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 17 00:06:53.924532 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 00:06:53.924541 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 17 00:06:53.924551 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 00:06:53.924561 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 17 00:06:53.924571 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 17 00:06:53.924581 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 17 00:06:53.924597 kernel: NX (Execute Disable) protection: active
Apr 17 00:06:53.924608 kernel: APIC: Static calls initialized
Apr 17 00:06:53.924618 kernel: SMBIOS 2.8 present.
Apr 17 00:06:53.924629 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 17 00:06:53.924639 kernel: DMI: Memory slots populated: 1/1
Apr 17 00:06:53.924649 kernel: Hypervisor detected: KVM
Apr 17 00:06:53.924663 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 17 00:06:53.924674 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 00:06:53.924684 kernel: kvm-clock: using sched offset of 7298016899 cycles
Apr 17 00:06:53.924696 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 00:06:53.924707 kernel: tsc: Detected 1999.997 MHz processor
Apr 17 00:06:53.924719 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 00:06:53.924730 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 00:06:53.924740 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 17 00:06:53.924751 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 17 00:06:53.924761 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 00:06:53.924775 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 17 00:06:53.924785 kernel: Using GB pages for direct mapping
Apr 17 00:06:53.924795 kernel: ACPI: Early table checksum verification disabled
Apr 17 00:06:53.924805 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 17 00:06:53.924816 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:06:53.924826 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:06:53.924837 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:06:53.924846 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 17 00:06:53.924856 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:06:53.924870 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:06:53.924884 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:06:53.924895 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 00:06:53.924906 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 17 00:06:53.924917 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 17 00:06:53.924926 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 17 00:06:53.924932 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 17 00:06:53.924939 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 17 00:06:53.924946 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 17 00:06:53.924952 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 17 00:06:53.924959 kernel: No NUMA configuration found
Apr 17 00:06:53.924965 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 17 00:06:53.924972 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Apr 17 00:06:53.924979 kernel: Zone ranges:
Apr 17 00:06:53.924988 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 00:06:53.924995 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 17 00:06:53.925007 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 17 00:06:53.925018 kernel: Device empty
Apr 17 00:06:53.925028 kernel: Movable zone start for each node
Apr 17 00:06:53.925038 kernel: Early memory node ranges
Apr 17 00:06:53.925049 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 17 00:06:53.925059 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 17 00:06:53.925145 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 17 00:06:53.925159 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 17 00:06:53.925174 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 00:06:53.925185 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 17 00:06:53.925192 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 17 00:06:53.925199 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 00:06:53.925206 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 00:06:53.925212 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 00:06:53.925219 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 00:06:53.925226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 00:06:53.925233 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 00:06:53.925242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 00:06:53.925248 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 00:06:53.925255 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 00:06:53.925262 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 00:06:53.925268 kernel: TSC deadline timer available
Apr 17 00:06:53.925275 kernel: CPU topo: Max. logical packages: 1
Apr 17 00:06:53.925281 kernel: CPU topo: Max. logical dies: 1
Apr 17 00:06:53.925288 kernel: CPU topo: Max. dies per package: 1
Apr 17 00:06:53.925294 kernel: CPU topo: Max. threads per core: 1
Apr 17 00:06:53.925303 kernel: CPU topo: Num. cores per package: 2
Apr 17 00:06:53.925310 kernel: CPU topo: Num. threads per package: 2
Apr 17 00:06:53.925316 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Apr 17 00:06:53.925323 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 00:06:53.925330 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 00:06:53.925336 kernel: kvm-guest: setup PV sched yield
Apr 17 00:06:53.925343 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 17 00:06:53.925349 kernel: Booting paravirtualized kernel on KVM
Apr 17 00:06:53.925356 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 00:06:53.925365 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 00:06:53.925372 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u1048576
Apr 17 00:06:53.925379 kernel: pcpu-alloc: s207448 r8192 d30120 u1048576 alloc=1*2097152
Apr 17 00:06:53.925385 kernel: pcpu-alloc: [0] 0 1
Apr 17 00:06:53.925392 kernel: kvm-guest: PV spinlocks enabled
Apr 17 00:06:53.925398 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 00:06:53.925406 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9
Apr 17 00:06:53.925413 kernel: random: crng init done
Apr 17 00:06:53.925422 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 00:06:53.925428 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 00:06:53.925435 kernel: Fallback order for Node 0: 0
Apr 17 00:06:53.925442 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Apr 17 00:06:53.925448 kernel: Policy zone: Normal
Apr 17 00:06:53.925455 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 00:06:53.925461 kernel: software IO TLB: area num 2.
Apr 17 00:06:53.925468 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 00:06:53.925475 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 17 00:06:53.925483 kernel: ftrace: allocated 157 pages with 5 groups
Apr 17 00:06:53.925490 kernel: Dynamic Preempt: voluntary
Apr 17 00:06:53.925497 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 00:06:53.925504 kernel: rcu: RCU event tracing is enabled.
Apr 17 00:06:53.925511 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 00:06:53.925518 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 00:06:53.925525 kernel: Rude variant of Tasks RCU enabled.
Apr 17 00:06:53.925531 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 00:06:53.925538 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 00:06:53.925545 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 00:06:53.925554 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 00:06:53.925567 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 00:06:53.925577 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 00:06:53.925584 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 17 00:06:53.925591 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 00:06:53.925597 kernel: Console: colour VGA+ 80x25
Apr 17 00:06:53.925604 kernel: printk: legacy console [tty0] enabled
Apr 17 00:06:53.925611 kernel: printk: legacy console [ttyS0] enabled
Apr 17 00:06:53.925619 kernel: ACPI: Core revision 20240827
Apr 17 00:06:53.925628 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 00:06:53.925635 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 00:06:53.925642 kernel: x2apic enabled
Apr 17 00:06:53.925648 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 00:06:53.925655 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 00:06:53.925663 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 00:06:53.925669 kernel: kvm-guest: setup PV IPIs
Apr 17 00:06:53.925679 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 00:06:53.925686 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Apr 17 00:06:53.925693 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999997)
Apr 17 00:06:53.925700 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 00:06:53.925707 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 17 00:06:53.925714 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 17 00:06:53.925721 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 00:06:53.925728 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 00:06:53.925735 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 00:06:53.925744 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 17 00:06:53.925751 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 17 00:06:53.925758 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 17 00:06:53.925765 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 17 00:06:53.925773 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 17 00:06:53.925780 kernel: active return thunk: srso_alias_return_thunk
Apr 17 00:06:53.925787 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 17 00:06:53.925794 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 17 00:06:53.925803 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 00:06:53.925810 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 00:06:53.925817 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 00:06:53.925824 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 00:06:53.925831 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 17 00:06:53.925838 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 00:06:53.925844 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 17 00:06:53.925851 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 17 00:06:53.925858 kernel: Freeing SMP alternatives memory: 32K
Apr 17 00:06:53.925867 kernel: pid_max: default: 32768 minimum: 301
Apr 17 00:06:53.925874 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 17 00:06:53.925881 kernel: landlock: Up and running.
Apr 17 00:06:53.925888 kernel: SELinux: Initializing.
Apr 17 00:06:53.925895 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 00:06:53.925902 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 00:06:53.925909 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 17 00:06:53.925916 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 17 00:06:53.925923 kernel: ... version:                0
Apr 17 00:06:53.925932 kernel: ... bit width:              48
Apr 17 00:06:53.925939 kernel: ... generic registers:      6
Apr 17 00:06:53.925946 kernel: ... value mask:             0000ffffffffffff
Apr 17 00:06:53.925953 kernel: ... max period:             00007fffffffffff
Apr 17 00:06:53.925959 kernel: ... fixed-purpose events:   0
Apr 17 00:06:53.925966 kernel: ... event mask:             000000000000003f
Apr 17 00:06:53.925973 kernel: signal: max sigframe size: 3376
Apr 17 00:06:53.925980 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 00:06:53.925987 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 00:06:53.925996 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 17 00:06:53.926003 kernel: smp: Bringing up secondary CPUs ...
Apr 17 00:06:53.926010 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 00:06:53.926017 kernel: .... node #0, CPUs: #1
Apr 17 00:06:53.926024 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 00:06:53.926031 kernel: smpboot: Total of 2 processors activated (7999.98 BogoMIPS)
Apr 17 00:06:53.926038 kernel: Memory: 3953608K/4193772K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46216K init, 2532K bss, 235480K reserved, 0K cma-reserved)
Apr 17 00:06:53.926045 kernel: devtmpfs: initialized
Apr 17 00:06:53.926052 kernel: x86/mm: Memory block size: 128MB
Apr 17 00:06:53.926061 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 00:06:53.926068 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 00:06:53.926075 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 00:06:53.926094 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 00:06:53.926101 kernel: audit: initializing netlink subsys (disabled)
Apr 17 00:06:53.926108 kernel: audit: type=2000 audit(1776384411.166:1): state=initialized audit_enabled=0 res=1
Apr 17 00:06:53.926115 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 00:06:53.926122 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 00:06:53.926129 kernel: cpuidle: using governor menu
Apr 17 00:06:53.926138 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 00:06:53.926145 kernel: dca service started, version 1.12.1
Apr 17 00:06:53.926152 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 17 00:06:53.926159 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 00:06:53.926166 kernel: PCI: Using configuration type 1 for base access
Apr 17 00:06:53.926173 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 00:06:53.926180 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 00:06:53.926187 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 00:06:53.926194 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 00:06:53.926203 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 00:06:53.926210 kernel: ACPI: Added _OSI(Module Device)
Apr 17 00:06:53.926217 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 00:06:53.926224 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 00:06:53.926231 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 00:06:53.926237 kernel: ACPI: Interpreter enabled
Apr 17 00:06:53.926244 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 00:06:53.926251 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 00:06:53.926258 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 00:06:53.926267 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 00:06:53.928942 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 00:06:53.928960 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 00:06:53.929270 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 00:06:53.929463 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 00:06:53.929648 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 00:06:53.929666 kernel: PCI host bridge to bus 0000:00
Apr 17 00:06:53.929848 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 00:06:53.931152 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 00:06:53.931319 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 00:06:53.931467 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 17 00:06:53.932202 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 00:06:53.932361 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 17 00:06:53.932486 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 00:06:53.932675 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 17 00:06:53.932855 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 17 00:06:53.933013 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 17 00:06:53.933190 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 17 00:06:53.933358 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 17 00:06:53.933518 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 00:06:53.933699 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Apr 17 00:06:53.933874 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Apr 17 00:06:53.934031 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 17 00:06:53.936215 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 17 00:06:53.936361 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 17 00:06:53.936489 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Apr 17 00:06:53.936613 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 17 00:06:53.936767 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 17 00:06:53.936949 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 17 00:06:53.938187 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 17 00:06:53.938427 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 00:06:53.938637 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 17 00:06:53.938809 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Apr 17 00:06:53.938946 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Apr 17 00:06:53.939118 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 17 00:06:53.939280 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 17 00:06:53.939296 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 00:06:53.939309 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 00:06:53.939321 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 00:06:53.939333 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 00:06:53.939345 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 00:06:53.939357 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 00:06:53.939374 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 00:06:53.939382 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 00:06:53.939389 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 00:06:53.939396 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 00:06:53.939403 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 00:06:53.939410 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 00:06:53.939417 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 00:06:53.939424 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 00:06:53.939433 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 00:06:53.939449 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 00:06:53.939461 kernel: iommu: Default domain type: Translated
Apr 17 00:06:53.939473 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 00:06:53.939485 kernel: PCI: Using ACPI for IRQ routing
Apr 17 00:06:53.939497 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 00:06:53.939509 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 17 00:06:53.939518 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 17 00:06:53.939686 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 00:06:53.939861 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 00:06:53.940031 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 00:06:53.940046 kernel: vgaarb: loaded
Apr 17 00:06:53.940054 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 00:06:53.940066 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 00:06:53.940078 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 00:06:53.943115 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 00:06:53.943144 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 00:06:53.943152 kernel: pnp: PnP ACPI init
Apr 17 00:06:53.943306 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 00:06:53.943318 kernel: pnp: PnP ACPI: found 5 devices
Apr 17 00:06:53.943326 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 00:06:53.943334 kernel: NET: Registered PF_INET protocol family
Apr 17 00:06:53.943341 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 00:06:53.943348 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 00:06:53.943356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 00:06:53.943363 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 00:06:53.943373 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 00:06:53.943381 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 00:06:53.943388 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 00:06:53.943395 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 00:06:53.943402 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 00:06:53.943410 kernel: NET: Registered PF_XDP protocol family
Apr 17 00:06:53.943527 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 00:06:53.943685 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 00:06:53.943800 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 00:06:53.943916 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 17 00:06:53.944027 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 00:06:53.944159 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 17 00:06:53.944170 kernel: PCI: CLS 0 bytes, default 64
Apr 17 00:06:53.944178 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 17 00:06:53.944185 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 17 00:06:53.944193 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a856ed927, max_idle_ns: 881590446804 ns
Apr 17 00:06:53.944200 kernel: Initialise system trusted keyrings
Apr 17 00:06:53.944211 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 00:06:53.944219 kernel: Key type asymmetric registered
Apr 17 00:06:53.944226 kernel: Asymmetric key parser 'x509' registered
Apr 17 00:06:53.944233 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 17 00:06:53.944240 kernel: io scheduler mq-deadline registered
Apr 17 00:06:53.944247 kernel: io scheduler kyber registered
Apr 17 00:06:53.944254 kernel: io scheduler bfq registered
Apr 17 00:06:53.944261 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 00:06:53.944270 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 00:06:53.944279 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 00:06:53.944287 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 00:06:53.944295 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 00:06:53.944302 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 00:06:53.944309 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 00:06:53.944316 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 00:06:53.944324 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 00:06:53.944455 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 17 00:06:53.944574 kernel: rtc_cmos 00:03: registered as rtc0
Apr 17 00:06:53.944692 kernel: rtc_cmos 00:03: setting system clock to 2026-04-17T00:06:53 UTC (1776384413)
Apr 17 00:06:53.944806 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 17 00:06:53.944816 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 17 00:06:53.944824 kernel: NET: Registered PF_INET6 protocol family
Apr 17 00:06:53.944831 kernel: Segment Routing with IPv6
Apr 17 00:06:53.944839 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 00:06:53.944846 kernel: NET: Registered PF_PACKET protocol family
Apr 17 00:06:53.944853 kernel: Key type dns_resolver registered
Apr 17 00:06:53.944863 kernel: IPI shorthand broadcast: enabled
Apr 17 00:06:53.944871 kernel: sched_clock: Marking stable (2914005053, 348135059)->(3354403881, -92263769)
Apr 17 00:06:53.944878 kernel: registered taskstats version 1
Apr 17 00:06:53.944885 kernel: Loading compiled-in X.509 certificates
Apr 17 00:06:53.944893 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 92f69eed5a22c94634d5240e5e65306547d4ba83'
Apr 17 00:06:53.944900 kernel: Demotion targets for Node 0: null
Apr 17 00:06:53.944907 kernel: Key type .fscrypt registered
Apr 17 00:06:53.944914 kernel: Key type fscrypt-provisioning registered
Apr 17 00:06:53.944922 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 00:06:53.944931 kernel: ima: Allocated hash algorithm: sha1
Apr 17 00:06:53.944939 kernel: ima: No architecture policies found
Apr 17 00:06:53.944946 kernel: clk: Disabling unused clocks
Apr 17 00:06:53.944953 kernel: Warning: unable to open an initial console.
Apr 17 00:06:53.944961 kernel: Freeing unused kernel image (initmem) memory: 46216K
Apr 17 00:06:53.944968 kernel: Write protecting the kernel read-only data: 40960k
Apr 17 00:06:53.944975 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 17 00:06:53.944983 kernel: Run /init as init process
Apr 17 00:06:53.944990 kernel: with arguments:
Apr 17 00:06:53.945000 kernel: /init
Apr 17 00:06:53.945007 kernel: with environment:
Apr 17 00:06:53.945029 kernel: HOME=/
Apr 17 00:06:53.945039 kernel: TERM=linux
Apr 17 00:06:53.945047 systemd[1]: Successfully made /usr/ read-only.
Apr 17 00:06:53.945058 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 00:06:53.945066 systemd[1]: Detected virtualization kvm.
Apr 17 00:06:53.945076 systemd[1]: Detected architecture x86-64.
Apr 17 00:06:53.946139 systemd[1]: Running in initrd.
Apr 17 00:06:53.946148 systemd[1]: No hostname configured, using default hostname.
Apr 17 00:06:53.946157 systemd[1]: Hostname set to .
Apr 17 00:06:53.946165 systemd[1]: Initializing machine ID from random generator.
Apr 17 00:06:53.946173 systemd[1]: Queued start job for default target initrd.target.
Apr 17 00:06:53.946180 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 00:06:53.946188 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 00:06:53.946200 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 00:06:53.946208 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 00:06:53.946216 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 00:06:53.946225 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 00:06:53.946234 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 00:06:53.946242 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 00:06:53.946250 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 00:06:53.946261 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 00:06:53.946268 systemd[1]: Reached target paths.target - Path Units.
Apr 17 00:06:53.946276 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 00:06:53.946284 systemd[1]: Reached target swap.target - Swaps.
Apr 17 00:06:53.946292 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 00:06:53.946300 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 00:06:53.946308 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 00:06:53.946316 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 17 00:06:53.946324 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 17 00:06:53.946334 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 00:06:53.946342 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 00:06:53.946354 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 00:06:53.946362 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 00:06:53.946370 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 17 00:06:53.946380 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 00:06:53.946388 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 17 00:06:53.946397 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 17 00:06:53.946405 systemd[1]: Starting systemd-fsck-usr.service... Apr 17 00:06:53.946413 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 00:06:53.946421 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 00:06:53.946429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 00:06:53.946460 systemd-journald[187]: Collecting audit messages is disabled. Apr 17 00:06:53.946493 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 17 00:06:53.946510 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 00:06:53.946518 systemd[1]: Finished systemd-fsck-usr.service. 
Apr 17 00:06:53.946527 systemd-journald[187]: Journal started Apr 17 00:06:53.946544 systemd-journald[187]: Runtime Journal (/run/log/journal/cb60ca7c83324ae9ab5ecae8f7f91ebe) is 8M, max 78.2M, 70.2M free. Apr 17 00:06:53.941706 systemd-modules-load[188]: Inserted module 'overlay' Apr 17 00:06:53.957167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 00:06:53.980100 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 17 00:06:53.980142 kernel: Bridge firewalling registered Apr 17 00:06:53.979590 systemd-modules-load[188]: Inserted module 'br_netfilter' Apr 17 00:06:54.074965 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 00:06:54.076058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 00:06:54.077205 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 00:06:54.078820 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 00:06:54.083925 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 17 00:06:54.088202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 00:06:54.099940 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 00:06:54.106211 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 00:06:54.113489 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 00:06:54.121042 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 00:06:54.123499 systemd-tmpfiles[209]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. 
Apr 17 00:06:54.126361 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 00:06:54.129204 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 17 00:06:54.130728 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 00:06:54.138887 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 00:06:54.153373 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=f73cf1d40ab12c6181d739932b2133dbe986804f7665fccb580a411e6eed38d9 Apr 17 00:06:54.187424 systemd-resolved[226]: Positive Trust Anchors: Apr 17 00:06:54.187438 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 00:06:54.187483 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 00:06:54.192471 systemd-resolved[226]: Defaulting to hostname 'linux'. Apr 17 00:06:54.196539 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 00:06:54.197802 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 17 00:06:54.254156 kernel: SCSI subsystem initialized Apr 17 00:06:54.264210 kernel: Loading iSCSI transport class v2.0-870. Apr 17 00:06:54.278121 kernel: iscsi: registered transport (tcp) Apr 17 00:06:54.302384 kernel: iscsi: registered transport (qla4xxx) Apr 17 00:06:54.302439 kernel: QLogic iSCSI HBA Driver Apr 17 00:06:54.327036 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 17 00:06:54.345800 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 17 00:06:54.349154 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 17 00:06:54.400586 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 17 00:06:54.404334 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 17 00:06:54.459105 kernel: raid6: avx2x4 gen() 24971 MB/s Apr 17 00:06:54.477129 kernel: raid6: avx2x2 gen() 23777 MB/s Apr 17 00:06:54.495219 kernel: raid6: avx2x1 gen() 17541 MB/s Apr 17 00:06:54.495263 kernel: raid6: using algorithm avx2x4 gen() 24971 MB/s Apr 17 00:06:54.515476 kernel: raid6: .... xor() 4546 MB/s, rmw enabled Apr 17 00:06:54.515513 kernel: raid6: using avx2x2 recovery algorithm Apr 17 00:06:54.538114 kernel: xor: automatically using best checksumming function avx Apr 17 00:06:54.678119 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 17 00:06:54.685426 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 17 00:06:54.687766 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 00:06:54.716954 systemd-udevd[435]: Using default interface naming scheme 'v255'. Apr 17 00:06:54.723899 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 00:06:54.728160 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 17 00:06:54.755710 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Apr 17 00:06:54.784474 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 00:06:54.787187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 00:06:54.867971 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 00:06:54.870245 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 17 00:06:55.083117 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Apr 17 00:06:55.095123 kernel: cryptd: max_cpu_qlen set to 1000 Apr 17 00:06:55.095145 kernel: scsi host0: Virtio SCSI HBA Apr 17 00:06:55.095336 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Apr 17 00:06:55.131118 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 17 00:06:55.136108 kernel: AES CTR mode by8 optimization enabled Apr 17 00:06:55.137114 kernel: libata version 3.00 loaded. Apr 17 00:06:55.155065 kernel: ahci 0000:00:1f.2: version 3.0 Apr 17 00:06:55.155308 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 17 00:06:55.168696 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 17 00:06:55.168889 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 17 00:06:55.192863 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 17 00:06:55.201112 kernel: scsi host1: ahci Apr 17 00:06:55.207143 kernel: scsi host2: ahci Apr 17 00:06:55.210161 kernel: scsi host3: ahci Apr 17 00:06:55.210561 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 00:06:55.210638 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 00:06:55.213678 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 00:06:55.218252 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 17 00:06:55.220381 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 17 00:06:55.258368 kernel: scsi host4: ahci Apr 17 00:06:55.258613 kernel: scsi host5: ahci Apr 17 00:06:55.259154 kernel: scsi host6: ahci Apr 17 00:06:55.259313 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 1 Apr 17 00:06:55.259325 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 1 Apr 17 00:06:55.259336 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 1 Apr 17 00:06:55.259351 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 1 Apr 17 00:06:55.259361 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 1 Apr 17 00:06:55.259371 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 1 Apr 17 00:06:55.259381 kernel: sd 0:0:0:0: Power-on or device reset occurred Apr 17 00:06:55.259552 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB) Apr 17 00:06:55.259731 kernel: sd 0:0:0:0: [sda] Write Protect is off Apr 17 00:06:55.259921 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Apr 17 00:06:55.260073 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 17 00:06:55.265675 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 17 00:06:55.265701 kernel: GPT:9289727 != 167739391 Apr 17 00:06:55.265712 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 17 00:06:55.268330 kernel: GPT:9289727 != 167739391 Apr 17 00:06:55.271332 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 17 00:06:55.271372 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 00:06:55.293189 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Apr 17 00:06:55.384473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Apr 17 00:06:55.538103 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 17 00:06:55.538170 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 17 00:06:55.538182 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 17 00:06:55.539118 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 17 00:06:55.541113 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 17 00:06:55.544111 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 17 00:06:55.613220 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Apr 17 00:06:55.623904 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Apr 17 00:06:55.624995 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 17 00:06:55.633834 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Apr 17 00:06:55.634647 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Apr 17 00:06:55.645187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 17 00:06:55.646871 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 00:06:55.647683 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 00:06:55.649417 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 00:06:55.651738 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 17 00:06:55.656201 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 17 00:06:55.671060 disk-uuid[612]: Primary Header is updated. Apr 17 00:06:55.671060 disk-uuid[612]: Secondary Entries is updated. Apr 17 00:06:55.671060 disk-uuid[612]: Secondary Header is updated. Apr 17 00:06:55.678460 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Apr 17 00:06:55.681578 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 00:06:55.696146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 00:06:56.697902 disk-uuid[615]: The operation has completed successfully. Apr 17 00:06:56.698877 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 17 00:06:56.755380 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 17 00:06:56.755542 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 17 00:06:56.793276 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 17 00:06:56.809743 sh[634]: Success Apr 17 00:06:56.829205 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 17 00:06:56.829243 kernel: device-mapper: uevent: version 1.0.3 Apr 17 00:06:56.831118 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 17 00:06:56.847287 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 17 00:06:56.895814 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 17 00:06:56.898408 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 17 00:06:56.906024 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 17 00:06:56.918103 kernel: BTRFS: device fsid d1542dca-1171-4bcf-9aae-d85dd05fe503 devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (646) Apr 17 00:06:56.922608 kernel: BTRFS info (device dm-0): first mount of filesystem d1542dca-1171-4bcf-9aae-d85dd05fe503 Apr 17 00:06:56.922632 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 17 00:06:56.937163 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations Apr 17 00:06:56.937189 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 17 00:06:56.937202 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 17 00:06:56.941697 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 17 00:06:56.942992 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 17 00:06:56.944300 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 17 00:06:56.945158 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 17 00:06:56.949587 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 17 00:06:56.978128 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (679) Apr 17 00:06:56.982459 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 00:06:56.982486 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 00:06:56.995562 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 00:06:56.995594 kernel: BTRFS info (device sda6): turning on async discard Apr 17 00:06:56.995607 kernel: BTRFS info (device sda6): enabling free space tree Apr 17 00:06:57.005117 kernel: BTRFS info (device sda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 00:06:57.006027 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 17 00:06:57.009449 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 17 00:06:57.100549 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 00:06:57.108200 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 17 00:06:57.120771 ignition[744]: Ignition 2.22.0 Apr 17 00:06:57.120803 ignition[744]: Stage: fetch-offline Apr 17 00:06:57.120838 ignition[744]: no configs at "/usr/lib/ignition/base.d" Apr 17 00:06:57.120848 ignition[744]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 00:06:57.120920 ignition[744]: parsed url from cmdline: "" Apr 17 00:06:57.120925 ignition[744]: no config URL provided Apr 17 00:06:57.120930 ignition[744]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 00:06:57.125378 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Apr 17 00:06:57.120938 ignition[744]: no config at "/usr/lib/ignition/user.ign" Apr 17 00:06:57.120943 ignition[744]: failed to fetch config: resource requires networking Apr 17 00:06:57.121067 ignition[744]: Ignition finished successfully Apr 17 00:06:57.150482 systemd-networkd[821]: lo: Link UP Apr 17 00:06:57.150495 systemd-networkd[821]: lo: Gained carrier Apr 17 00:06:57.152274 systemd-networkd[821]: Enumeration completed Apr 17 00:06:57.152353 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 17 00:06:57.153486 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 00:06:57.153490 systemd-networkd[821]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 00:06:57.154869 systemd[1]: Reached target network.target - Network. Apr 17 00:06:57.155661 systemd-networkd[821]: eth0: Link UP Apr 17 00:06:57.155867 systemd-networkd[821]: eth0: Gained carrier Apr 17 00:06:57.155876 systemd-networkd[821]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 00:06:57.159195 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Apr 17 00:06:57.182955 ignition[825]: Ignition 2.22.0 Apr 17 00:06:57.182971 ignition[825]: Stage: fetch Apr 17 00:06:57.183097 ignition[825]: no configs at "/usr/lib/ignition/base.d" Apr 17 00:06:57.183109 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 00:06:57.183186 ignition[825]: parsed url from cmdline: "" Apr 17 00:06:57.183191 ignition[825]: no config URL provided Apr 17 00:06:57.183196 ignition[825]: reading system config file "/usr/lib/ignition/user.ign" Apr 17 00:06:57.183204 ignition[825]: no config at "/usr/lib/ignition/user.ign" Apr 17 00:06:57.183225 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #1 Apr 17 00:06:57.183786 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 00:06:57.384254 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #2 Apr 17 00:06:57.384421 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 00:06:57.785006 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #3 Apr 17 00:06:57.785210 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 17 00:06:57.884155 systemd-networkd[821]: eth0: DHCPv4 address 172.238.171.230/24, gateway 172.238.171.1 acquired from 23.205.167.124 Apr 17 00:06:58.586271 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #4 Apr 17 00:06:58.680629 ignition[825]: PUT result: OK Apr 17 00:06:58.680685 ignition[825]: GET http://169.254.169.254/v1/user-data: attempt #1 Apr 17 00:06:58.789303 ignition[825]: GET result: OK Apr 17 00:06:58.789457 ignition[825]: parsing config with SHA512: f3a42f99c178474fb2b9158ae60699c48d7d4e9bdda0e1dbb356ec5bf5cd0ad255af50359d96d3e56d3547a24d8a8df4f65796a95ebfd0b37573bb70ff550d15 Apr 17 00:06:58.792858 unknown[825]: fetched base config from "system" Apr 17 00:06:58.792876 
unknown[825]: fetched base config from "system" Apr 17 00:06:58.793818 ignition[825]: fetch: fetch complete Apr 17 00:06:58.792885 unknown[825]: fetched user config from "akamai" Apr 17 00:06:58.793826 ignition[825]: fetch: fetch passed Apr 17 00:06:58.797399 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 17 00:06:58.793884 ignition[825]: Ignition finished successfully Apr 17 00:06:58.820206 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 17 00:06:58.849354 ignition[832]: Ignition 2.22.0 Apr 17 00:06:58.850161 ignition[832]: Stage: kargs Apr 17 00:06:58.850294 ignition[832]: no configs at "/usr/lib/ignition/base.d" Apr 17 00:06:58.850306 ignition[832]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 00:06:58.851281 ignition[832]: kargs: kargs passed Apr 17 00:06:58.853063 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 17 00:06:58.851322 ignition[832]: Ignition finished successfully Apr 17 00:06:58.856255 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 17 00:06:58.885586 ignition[838]: Ignition 2.22.0 Apr 17 00:06:58.885602 ignition[838]: Stage: disks Apr 17 00:06:58.885719 ignition[838]: no configs at "/usr/lib/ignition/base.d" Apr 17 00:06:58.885729 ignition[838]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 00:06:58.888349 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 17 00:06:58.886307 ignition[838]: disks: disks passed Apr 17 00:06:58.889750 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 17 00:06:58.886351 ignition[838]: Ignition finished successfully Apr 17 00:06:58.891215 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 17 00:06:58.892649 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 00:06:58.893991 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 17 00:06:58.895556 systemd[1]: Reached target basic.target - Basic System. Apr 17 00:06:58.897857 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 17 00:06:58.921989 systemd-fsck[846]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 17 00:06:58.927155 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 17 00:06:58.929026 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 17 00:06:59.007187 systemd-networkd[821]: eth0: Gained IPv6LL Apr 17 00:06:59.038104 kernel: EXT4-fs (sda9): mounted filesystem ee420a69-62b9-42f4-84c7-ea3f2d87c569 r/w with ordered data mode. Quota mode: none. Apr 17 00:06:59.038970 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 17 00:06:59.040393 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 17 00:06:59.042635 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 00:06:59.045155 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 17 00:06:59.046899 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 17 00:06:59.048707 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 17 00:06:59.049660 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 00:06:59.056177 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 17 00:06:59.059222 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 17 00:06:59.065107 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (854) Apr 17 00:06:59.069483 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 00:06:59.069505 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 00:06:59.078253 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 00:06:59.078278 kernel: BTRFS info (device sda6): turning on async discard Apr 17 00:06:59.080543 kernel: BTRFS info (device sda6): enabling free space tree Apr 17 00:06:59.086033 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 17 00:06:59.123211 initrd-setup-root[878]: cut: /sysroot/etc/passwd: No such file or directory Apr 17 00:06:59.128015 initrd-setup-root[885]: cut: /sysroot/etc/group: No such file or directory Apr 17 00:06:59.134103 initrd-setup-root[892]: cut: /sysroot/etc/shadow: No such file or directory Apr 17 00:06:59.139164 initrd-setup-root[899]: cut: /sysroot/etc/gshadow: No such file or directory Apr 17 00:06:59.234969 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 17 00:06:59.237208 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 17 00:06:59.239384 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 17 00:06:59.254453 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 17 00:06:59.258745 kernel: BTRFS info (device sda6): last unmount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 00:06:59.274275 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 17 00:06:59.289108 ignition[967]: INFO : Ignition 2.22.0 Apr 17 00:06:59.289108 ignition[967]: INFO : Stage: mount Apr 17 00:06:59.289108 ignition[967]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 00:06:59.289108 ignition[967]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 00:06:59.295739 ignition[967]: INFO : mount: mount passed Apr 17 00:06:59.295739 ignition[967]: INFO : Ignition finished successfully Apr 17 00:06:59.296033 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 17 00:06:59.300158 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 17 00:07:00.040750 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 17 00:07:00.065223 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (979) Apr 17 00:07:00.072382 kernel: BTRFS info (device sda6): first mount of filesystem aa52e89c-0ed3-4175-9a87-dc7b421a671a Apr 17 00:07:00.072528 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 17 00:07:00.077345 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 17 00:07:00.077370 kernel: BTRFS info (device sda6): turning on async discard Apr 17 00:07:00.081792 kernel: BTRFS info (device sda6): enabling free space tree Apr 17 00:07:00.083878 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 17 00:07:00.115039 ignition[995]: INFO : Ignition 2.22.0 Apr 17 00:07:00.115039 ignition[995]: INFO : Stage: files Apr 17 00:07:00.116887 ignition[995]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 00:07:00.116887 ignition[995]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Apr 17 00:07:00.116887 ignition[995]: DEBUG : files: compiled without relabeling support, skipping Apr 17 00:07:00.120151 ignition[995]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 17 00:07:00.120151 ignition[995]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 17 00:07:00.122861 ignition[995]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 17 00:07:00.122861 ignition[995]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 17 00:07:00.125024 ignition[995]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 17 00:07:00.125024 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 00:07:00.125024 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 17 00:07:00.123010 unknown[995]: wrote ssh authorized keys file for user: core Apr 17 00:07:00.334481 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 17 00:07:00.477520 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 17 
00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 00:07:00.479452 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 00:07:00.517746 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 17 00:07:00.899706 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 00:07:01.328738 ignition[995]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 00:07:01.328738 ignition[995]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 00:07:01.332722 ignition[995]: INFO : files: files passed
Apr 17 00:07:01.332722 ignition[995]: INFO : Ignition finished successfully
Apr 17 00:07:01.335161 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 00:07:01.336956 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 00:07:01.349202 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 00:07:01.352804 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 00:07:01.353576 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 00:07:01.367584 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 00:07:01.367584 initrd-setup-root-after-ignition[1026]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 00:07:01.370029 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 00:07:01.372544 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 00:07:01.373566 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 00:07:01.375941 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 00:07:01.418167 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 00:07:01.418335 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 00:07:01.420367 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 00:07:01.421602 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 00:07:01.423365 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 00:07:01.424241 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 00:07:01.442412 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 00:07:01.445257 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 00:07:01.462919 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 00:07:01.463882 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 00:07:01.465671 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 00:07:01.467245 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 00:07:01.467388 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 00:07:01.469149 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 00:07:01.470203 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 00:07:01.471746 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 00:07:01.473227 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 00:07:01.474634 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 00:07:01.476473 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 17 00:07:01.478501 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 00:07:01.480282 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 00:07:01.481830 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 00:07:01.483353 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 00:07:01.484981 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 00:07:01.486432 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 00:07:01.486583 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 00:07:01.488321 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 00:07:01.489366 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 00:07:01.490860 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 00:07:01.491127 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 00:07:01.492426 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 00:07:01.492526 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 00:07:01.494585 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 00:07:01.494740 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 00:07:01.495730 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 00:07:01.495889 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 00:07:01.499186 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 00:07:01.502251 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 00:07:01.503814 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 00:07:01.504871 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 00:07:01.508200 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 00:07:01.508301 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 00:07:01.517915 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 00:07:01.518452 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 00:07:01.549470 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 00:07:01.552098 ignition[1050]: INFO : Ignition 2.22.0
Apr 17 00:07:01.552098 ignition[1050]: INFO : Stage: umount
Apr 17 00:07:01.552098 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 00:07:01.552098 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 17 00:07:01.559473 ignition[1050]: INFO : umount: umount passed
Apr 17 00:07:01.559473 ignition[1050]: INFO : Ignition finished successfully
Apr 17 00:07:01.554705 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 00:07:01.554832 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 00:07:01.557151 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 00:07:01.557253 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 00:07:01.558990 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 00:07:01.559071 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 00:07:01.560296 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 00:07:01.560347 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 00:07:01.561635 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 00:07:01.561701 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 00:07:01.562992 systemd[1]: Stopped target network.target - Network.
Apr 17 00:07:01.564310 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 00:07:01.564364 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 00:07:01.566468 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 00:07:01.567832 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 00:07:01.572127 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 00:07:01.572991 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 00:07:01.574397 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 00:07:01.575851 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 00:07:01.575900 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 00:07:01.577429 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 00:07:01.577469 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 00:07:01.579005 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 00:07:01.579065 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 00:07:01.580448 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 00:07:01.580498 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 00:07:01.582042 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 00:07:01.582113 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 00:07:01.583855 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 00:07:01.585407 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 00:07:01.589936 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 00:07:01.590074 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 00:07:01.593840 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 17 00:07:01.595284 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 00:07:01.595401 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 00:07:01.598181 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 17 00:07:01.598538 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 17 00:07:01.599657 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 00:07:01.599701 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 00:07:01.603153 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 00:07:01.607142 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 00:07:01.607198 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 00:07:01.608645 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 00:07:01.608698 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 00:07:01.611268 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 00:07:01.611316 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 00:07:01.614668 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 00:07:01.614718 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 00:07:01.616738 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 00:07:01.622842 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 17 00:07:01.622925 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 17 00:07:01.631452 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 00:07:01.633521 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 00:07:01.634855 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 00:07:01.634932 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 00:07:01.636173 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 00:07:01.636215 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 00:07:01.637915 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 00:07:01.637968 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 00:07:01.640124 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 00:07:01.640178 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 00:07:01.641827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 00:07:01.641893 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 00:07:01.645208 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 00:07:01.646167 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 17 00:07:01.646230 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 00:07:01.648946 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 00:07:01.649000 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 00:07:01.651708 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 00:07:01.651763 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 00:07:01.653204 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 00:07:01.653255 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 00:07:01.654873 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 00:07:01.654924 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 00:07:01.657098 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 17 00:07:01.657163 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Apr 17 00:07:01.657208 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 17 00:07:01.657262 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 17 00:07:01.657696 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 00:07:01.657839 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 00:07:01.664424 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 00:07:01.664532 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 00:07:01.665750 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 00:07:01.667611 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 00:07:01.685203 systemd[1]: Switching root.
Apr 17 00:07:01.723911 systemd-journald[187]: Journal stopped
Apr 17 00:07:02.943523 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Apr 17 00:07:02.943555 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 00:07:02.943567 kernel: SELinux: policy capability open_perms=1
Apr 17 00:07:02.943576 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 00:07:02.943585 kernel: SELinux: policy capability always_check_network=0
Apr 17 00:07:02.943597 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 00:07:02.943606 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 00:07:02.943616 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 00:07:02.943625 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 00:07:02.943634 kernel: SELinux: policy capability userspace_initial_context=0
Apr 17 00:07:02.943644 kernel: audit: type=1403 audit(1776384421.877:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 00:07:02.943654 systemd[1]: Successfully loaded SELinux policy in 64.562ms.
Apr 17 00:07:02.943667 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.925ms.
Apr 17 00:07:02.943678 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 17 00:07:02.943689 systemd[1]: Detected virtualization kvm.
Apr 17 00:07:02.943700 systemd[1]: Detected architecture x86-64.
Apr 17 00:07:02.943711 systemd[1]: Detected first boot.
Apr 17 00:07:02.943722 systemd[1]: Initializing machine ID from random generator.
Apr 17 00:07:02.943732 zram_generator::config[1093]: No configuration found.
Apr 17 00:07:02.943742 kernel: Guest personality initialized and is inactive
Apr 17 00:07:02.943752 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 17 00:07:02.943761 kernel: Initialized host personality
Apr 17 00:07:02.943770 kernel: NET: Registered PF_VSOCK protocol family
Apr 17 00:07:02.943780 systemd[1]: Populated /etc with preset unit settings.
Apr 17 00:07:02.943793 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 17 00:07:02.943804 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 00:07:02.943814 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 00:07:02.943824 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 00:07:02.943834 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 00:07:02.943844 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 00:07:02.943854 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 00:07:02.943867 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 00:07:02.943877 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 00:07:02.943887 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 00:07:02.943897 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 00:07:02.943907 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 00:07:02.943918 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 00:07:02.943928 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 00:07:02.943938 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 00:07:02.943979 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 00:07:02.943995 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 00:07:02.944006 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 00:07:02.944017 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 00:07:02.944028 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 00:07:02.944038 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 00:07:02.944048 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 00:07:02.944061 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 00:07:02.944071 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 00:07:02.944096 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 00:07:02.944107 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 00:07:02.944117 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 00:07:02.944127 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 00:07:02.944138 systemd[1]: Reached target swap.target - Swaps.
Apr 17 00:07:02.944148 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 00:07:02.944158 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 00:07:02.944171 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 17 00:07:02.944182 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 00:07:02.944193 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 00:07:02.944203 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 00:07:02.944215 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 00:07:02.944226 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 00:07:02.944236 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 00:07:02.944247 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 00:07:02.944257 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:07:02.944268 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 00:07:02.944280 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 00:07:02.944290 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 00:07:02.944303 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 00:07:02.944314 systemd[1]: Reached target machines.target - Containers.
Apr 17 00:07:02.944324 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 00:07:02.944334 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 00:07:02.944345 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 00:07:02.944355 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 00:07:02.944366 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 00:07:02.944376 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 00:07:02.944386 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 00:07:02.944399 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 00:07:02.944409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 00:07:02.944420 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 00:07:02.944430 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 00:07:02.944441 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 00:07:02.944451 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 00:07:02.944461 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 00:07:02.946410 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 00:07:02.946434 kernel: fuse: init (API version 7.41)
Apr 17 00:07:02.946446 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 00:07:02.946457 kernel: ACPI: bus type drm_connector registered
Apr 17 00:07:02.946467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 00:07:02.946477 kernel: loop: module loaded
Apr 17 00:07:02.946487 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 00:07:02.946498 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 00:07:02.946508 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 17 00:07:02.946521 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 00:07:02.946532 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 00:07:02.946543 systemd[1]: Stopped verity-setup.service.
Apr 17 00:07:02.946553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:07:02.946564 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 00:07:02.946574 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 00:07:02.946585 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 00:07:02.946622 systemd-journald[1174]: Collecting audit messages is disabled.
Apr 17 00:07:02.946649 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 00:07:02.946660 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 00:07:02.946671 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 00:07:02.946681 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 00:07:02.946692 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 00:07:02.946705 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 00:07:02.946716 systemd-journald[1174]: Journal started
Apr 17 00:07:02.946736 systemd-journald[1174]: Runtime Journal (/run/log/journal/8ce4c0f9c6994abe8960e7c3cf0a1111) is 8M, max 78.2M, 70.2M free.
Apr 17 00:07:02.534703 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 00:07:02.546881 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 17 00:07:02.950136 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 00:07:02.547845 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 00:07:02.954114 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 00:07:02.954954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 00:07:02.955270 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 00:07:02.956334 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 00:07:02.956598 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 00:07:02.957665 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 00:07:02.957883 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 00:07:02.959094 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 00:07:02.959394 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 00:07:02.960634 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 00:07:02.960874 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 00:07:02.962006 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 00:07:02.963206 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 00:07:02.964485 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 00:07:02.965705 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 17 00:07:02.979528 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 00:07:02.983167 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 00:07:02.985160 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 00:07:02.986928 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 00:07:02.987016 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 00:07:02.989881 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 17 00:07:03.000218 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 00:07:03.001695 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 00:07:03.005211 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 00:07:03.009273 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 00:07:03.010108 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 00:07:03.012516 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 00:07:03.013375 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 00:07:03.024240 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 00:07:03.027174 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 00:07:03.029251 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 00:07:03.038510 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 00:07:03.040373 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 00:07:03.071182 systemd-journald[1174]: Time spent on flushing to /var/log/journal/8ce4c0f9c6994abe8960e7c3cf0a1111 is 78.634ms for 1010 entries.
Apr 17 00:07:03.071182 systemd-journald[1174]: System Journal (/var/log/journal/8ce4c0f9c6994abe8960e7c3cf0a1111) is 8M, max 195.6M, 187.6M free.
Apr 17 00:07:03.178253 systemd-journald[1174]: Received client request to flush runtime journal.
Apr 17 00:07:03.178308 kernel: loop0: detected capacity change from 0 to 128560
Apr 17 00:07:03.178337 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 00:07:03.070095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 00:07:03.079472 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 00:07:03.080897 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 00:07:03.088485 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 17 00:07:03.130910 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 00:07:03.157790 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Apr 17 00:07:03.157809 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Apr 17 00:07:03.160571 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 17 00:07:03.180688 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 00:07:03.189656 kernel: loop1: detected capacity change from 0 to 8
Apr 17 00:07:03.189815 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 00:07:03.203428 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 00:07:03.215290 kernel: loop2: detected capacity change from 0 to 110984
Apr 17 00:07:03.260145 kernel: loop3: detected capacity change from 0 to 228704
Apr 17 00:07:03.269526 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 00:07:03.274625 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 00:07:03.313789 kernel: loop4: detected capacity change from 0 to 128560
Apr 17 00:07:03.315605 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Apr 17 00:07:03.315629 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Apr 17 00:07:03.333097 kernel: loop5: detected capacity change from 0 to 8
Apr 17 00:07:03.337866 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 00:07:03.342104 kernel: loop6: detected capacity change from 0 to 110984
Apr 17 00:07:03.365115 kernel: loop7: detected capacity change from 0 to 228704
Apr 17 00:07:03.383466 (sd-merge)[1246]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Apr 17 00:07:03.385582 (sd-merge)[1246]: Merged extensions into '/usr'.
Apr 17 00:07:03.395517 systemd[1]: Reload requested from client PID 1218 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 00:07:03.395533 systemd[1]: Reloading...
Apr 17 00:07:03.507150 zram_generator::config[1272]: No configuration found.
Apr 17 00:07:03.555126 ldconfig[1213]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 17 00:07:03.704671 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 00:07:03.704872 systemd[1]: Reloading finished in 308 ms.
Apr 17 00:07:03.725131 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 17 00:07:03.726297 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 00:07:03.727498 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 00:07:03.736480 systemd[1]: Starting ensure-sysext.service...
Apr 17 00:07:03.740206 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 00:07:03.751578 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 00:07:03.767692 systemd[1]: Reload requested from client PID 1317 ('systemctl') (unit ensure-sysext.service)...
Apr 17 00:07:03.767706 systemd[1]: Reloading...
Apr 17 00:07:03.768241 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 17 00:07:03.768272 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 17 00:07:03.768586 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 00:07:03.768857 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 00:07:03.769789 systemd-tmpfiles[1318]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 00:07:03.770046 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Apr 17 00:07:03.770146 systemd-tmpfiles[1318]: ACLs are not supported, ignoring.
Apr 17 00:07:03.778006 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 00:07:03.778020 systemd-tmpfiles[1318]: Skipping /boot
Apr 17 00:07:03.799060 systemd-udevd[1319]: Using default interface naming scheme 'v255'.
Apr 17 00:07:03.806659 systemd-tmpfiles[1318]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 00:07:03.806739 systemd-tmpfiles[1318]: Skipping /boot
Apr 17 00:07:03.873118 zram_generator::config[1353]: No configuration found.
Apr 17 00:07:04.119109 kernel: mousedev: PS/2 mouse device common for all mice
Apr 17 00:07:04.154115 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 17 00:07:04.159412 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 17 00:07:04.161110 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 17 00:07:04.161512 systemd[1]: Reloading finished in 393 ms.
Apr 17 00:07:04.168112 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 17 00:07:04.173294 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 00:07:04.175113 kernel: ACPI: button: Power Button [PWRF]
Apr 17 00:07:04.175595 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 00:07:04.216063 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:07:04.221157 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 17 00:07:04.225283 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 00:07:04.226470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 00:07:04.232292 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 00:07:04.235109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 00:07:04.240405 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 00:07:04.241393 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 00:07:04.241492 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 00:07:04.244683 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 00:07:04.251151 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 00:07:04.258357 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 00:07:04.272386 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 00:07:04.275287 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:07:04.280264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 00:07:04.281135 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 00:07:04.285389 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:07:04.285647 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 00:07:04.286025 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 00:07:04.286919 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 00:07:04.287025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 00:07:04.287268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:07:04.303102 kernel: EDAC MC: Ver: 3.0.0
Apr 17 00:07:04.301617 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 00:07:04.307328 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:07:04.307540 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 00:07:04.309737 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 00:07:04.314811 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 00:07:04.315686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 00:07:04.315782 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 17 00:07:04.315894 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 00:07:04.321416 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 00:07:04.321660 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 00:07:04.330515 systemd[1]: Finished ensure-sysext.service.
Apr 17 00:07:04.337744 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 00:07:04.342203 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 00:07:04.347457 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 00:07:04.355105 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 00:07:04.361694 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 00:07:04.361927 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 00:07:04.373748 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 00:07:04.374841 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 00:07:04.378768 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 00:07:04.386242 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 00:07:04.397635 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 00:07:04.399785 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 00:07:04.403039 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 00:07:04.404372 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 00:07:04.422308 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 00:07:04.425919 augenrules[1484]: No rules
Apr 17 00:07:04.424902 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 00:07:04.425155 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 17 00:07:04.449383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 00:07:04.469629 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 17 00:07:04.475341 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 00:07:04.511699 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 00:07:04.569207 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 00:07:04.693272 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 00:07:04.750061 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 00:07:04.750304 systemd-resolved[1450]: Positive Trust Anchors:
Apr 17 00:07:04.750313 systemd-resolved[1450]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 00:07:04.750340 systemd-resolved[1450]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 00:07:04.751242 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 00:07:04.758406 systemd-resolved[1450]: Defaulting to hostname 'linux'.
Apr 17 00:07:04.760598 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 00:07:04.761562 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 00:07:04.762048 systemd-networkd[1449]: lo: Link UP
Apr 17 00:07:04.762059 systemd-networkd[1449]: lo: Gained carrier
Apr 17 00:07:04.762644 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 00:07:04.763524 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 00:07:04.763833 systemd-networkd[1449]: Enumeration completed
Apr 17 00:07:04.764237 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 00:07:04.764248 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 00:07:04.764720 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 00:07:04.765042 systemd-networkd[1449]: eth0: Link UP
Apr 17 00:07:04.765316 systemd-networkd[1449]: eth0: Gained carrier
Apr 17 00:07:04.765334 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 00:07:04.765721 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 17 00:07:04.766686 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 00:07:04.789977 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 00:07:04.790966 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 00:07:04.791885 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 00:07:04.791960 systemd[1]: Reached target paths.target - Path Units.
Apr 17 00:07:04.792725 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 00:07:04.794867 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 00:07:04.797464 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 00:07:04.800296 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 17 00:07:04.801303 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 17 00:07:04.802134 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 17 00:07:04.804855 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 00:07:04.806156 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 17 00:07:04.807709 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 00:07:04.808893 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 00:07:04.810288 systemd[1]: Reached target network.target - Network.
Apr 17 00:07:04.811199 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 00:07:04.812063 systemd[1]: Reached target basic.target - Basic System.
Apr 17 00:07:04.812826 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 00:07:04.812863 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 00:07:04.813843 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 00:07:04.816205 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 17 00:07:04.819249 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 00:07:04.828346 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 00:07:04.832170 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 00:07:04.838201 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 00:07:04.838990 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 00:07:04.841257 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 17 00:07:04.847334 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 00:07:04.852685 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 00:07:04.853377 jq[1517]: false
Apr 17 00:07:04.857283 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 00:07:04.860901 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 00:07:04.874682 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 00:07:04.878723 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Apr 17 00:07:04.879912 oslogin_cache_refresh[1519]: Refreshing passwd entry cache
Apr 17 00:07:04.881912 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 17 00:07:04.884256 oslogin_cache_refresh[1519]: Failure getting users, quitting
Apr 17 00:07:04.886786 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting users, quitting
Apr 17 00:07:04.886786 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 17 00:07:04.886786 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Refreshing group entry cache
Apr 17 00:07:04.886786 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Failure getting groups, quitting
Apr 17 00:07:04.886786 google_oslogin_nss_cache[1519]: oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 17 00:07:04.884271 oslogin_cache_refresh[1519]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 17 00:07:04.884311 oslogin_cache_refresh[1519]: Refreshing group entry cache
Apr 17 00:07:04.884778 oslogin_cache_refresh[1519]: Failure getting groups, quitting
Apr 17 00:07:04.884788 oslogin_cache_refresh[1519]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 17 00:07:04.888312 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 00:07:04.891011 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 00:07:04.891606 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 00:07:04.899286 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 00:07:04.908233 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 00:07:04.920161 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 00:07:04.922300 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 00:07:04.922543 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 00:07:04.922876 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 17 00:07:04.923128 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 17 00:07:04.925275 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 00:07:04.926148 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 00:07:04.931446 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 00:07:04.931678 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 00:07:04.944529 extend-filesystems[1518]: Found /dev/sda6
Apr 17 00:07:04.950322 jq[1541]: true
Apr 17 00:07:04.965517 extend-filesystems[1518]: Found /dev/sda9
Apr 17 00:07:04.966476 update_engine[1539]: I20260417 00:07:04.962064 1539 main.cc:92] Flatcar Update Engine starting
Apr 17 00:07:04.981169 extend-filesystems[1518]: Checking size of /dev/sda9
Apr 17 00:07:04.996033 (ntainerd)[1554]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 00:07:04.999820 coreos-metadata[1514]: Apr 17 00:07:04.998 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 17 00:07:05.003332 jq[1556]: true
Apr 17 00:07:05.006096 tar[1544]: linux-amd64/LICENSE
Apr 17 00:07:05.006304 tar[1544]: linux-amd64/helm
Apr 17 00:07:05.011880 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 17 00:07:05.015018 dbus-daemon[1515]: [system] SELinux support is enabled
Apr 17 00:07:05.015288 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 00:07:05.020050 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 00:07:05.020117 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 00:07:05.022174 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 00:07:05.022199 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 00:07:05.025534 extend-filesystems[1518]: Resized partition /dev/sda9
Apr 17 00:07:05.035473 extend-filesystems[1569]: resize2fs 1.47.3 (8-Jul-2025)
Apr 17 00:07:05.043615 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Apr 17 00:07:05.049519 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 00:07:05.050663 update_engine[1539]: I20260417 00:07:05.050617 1539 update_check_scheduler.cc:74] Next update check in 2m54s
Apr 17 00:07:05.068471 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 00:07:05.124262 systemd-logind[1526]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 17 00:07:05.124299 systemd-logind[1526]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 00:07:05.127418 systemd-logind[1526]: New seat seat0.
Apr 17 00:07:05.131491 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 00:07:05.163626 bash[1585]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 00:07:05.165357 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 00:07:05.170359 systemd[1]: Starting sshkeys.service...
Apr 17 00:07:05.220992 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 17 00:07:05.226411 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 17 00:07:05.326554 coreos-metadata[1593]: Apr 17 00:07:05.323 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 17 00:07:05.342101 kernel: EXT4-fs (sda9): resized filesystem to 20360187
Apr 17 00:07:05.357060 containerd[1554]: time="2026-04-17T00:07:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 17 00:07:05.361318 extend-filesystems[1569]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 17 00:07:05.361318 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 10
Apr 17 00:07:05.361318 extend-filesystems[1569]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long.
Apr 17 00:07:05.376142 extend-filesystems[1518]: Resized filesystem in /dev/sda9
Apr 17 00:07:05.365768 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 00:07:05.378663 containerd[1554]: time="2026-04-17T00:07:05.364631096Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 17 00:07:05.366055 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 00:07:05.396525 sshd_keygen[1542]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 00:07:05.399281 containerd[1554]: time="2026-04-17T00:07:05.399200128Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="122.171µs"
Apr 17 00:07:05.399281 containerd[1554]: time="2026-04-17T00:07:05.399232188Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 17 00:07:05.399281 containerd[1554]: time="2026-04-17T00:07:05.399251348Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 17 00:07:05.401113 containerd[1554]: time="2026-04-17T00:07:05.399406448Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 17 00:07:05.401113 containerd[1554]: time="2026-04-17T00:07:05.399427378Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 17 00:07:05.401113 containerd[1554]: time="2026-04-17T00:07:05.399451668Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 17 00:07:05.401113 containerd[1554]: time="2026-04-17T00:07:05.399523228Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 17 00:07:05.401113 containerd[1554]: time="2026-04-17T00:07:05.399538368Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 17 00:07:05.401113 containerd[1554]: time="2026-04-17T00:07:05.399787979Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 17 00:07:05.401113 containerd[1554]: time="2026-04-17T00:07:05.399802969Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 17 00:07:05.401113 containerd[1554]: time="2026-04-17T00:07:05.399821809Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 17 00:07:05.401113 containerd[1554]: time="2026-04-17T00:07:05.399856489Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 17 00:07:05.401113 containerd[1554]: time="2026-04-17T00:07:05.399938919Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 17 00:07:05.401758 containerd[1554]: time="2026-04-17T00:07:05.401476471Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 17 00:07:05.401758 containerd[1554]: time="2026-04-17T00:07:05.401514971Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 17 00:07:05.401758 containerd[1554]: time="2026-04-17T00:07:05.401529831Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 17 00:07:05.401758 containerd[1554]: time="2026-04-17T00:07:05.401561071Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 17 00:07:05.401758 containerd[1554]: time="2026-04-17T00:07:05.401744702Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 17 00:07:05.401857 containerd[1554]: time="2026-04-17T00:07:05.401804582Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 00:07:05.404754 containerd[1554]: time="2026-04-17T00:07:05.404729316Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 17 00:07:05.404797 containerd[1554]: time="2026-04-17T00:07:05.404781146Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 17 00:07:05.404828 containerd[1554]: time="2026-04-17T00:07:05.404795296Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 17 00:07:05.404828 containerd[1554]: time="2026-04-17T00:07:05.404805836Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 17 00:07:05.404862 containerd[1554]: time="2026-04-17T00:07:05.404816226Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 17 00:07:05.404879 containerd[1554]: time="2026-04-17T00:07:05.404870186Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 17 00:07:05.404897 containerd[1554]: time="2026-04-17T00:07:05.404884256Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 17 00:07:05.404921 containerd[1554]: time="2026-04-17T00:07:05.404895096Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 17 00:07:05.404921 containerd[1554]: time="2026-04-17T00:07:05.404904726Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 17 00:07:05.404921 containerd[1554]: time="2026-04-17T00:07:05.404913926Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 17 00:07:05.404977 containerd[1554]: time="2026-04-17T00:07:05.404922456Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 17 00:07:05.404977 containerd[1554]: time="2026-04-17T00:07:05.404933266Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 17 00:07:05.406103 containerd[1554]: time="2026-04-17T00:07:05.405040186Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 17 00:07:05.406103 containerd[1554]: time="2026-04-17T00:07:05.405067156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 17 00:07:05.406103 containerd[1554]: time="2026-04-17T00:07:05.405779938Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 17 00:07:05.406103 containerd[1554]: time="2026-04-17T00:07:05.405799888Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 17 00:07:05.406103 containerd[1554]: time="2026-04-17T00:07:05.405869438Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 17 00:07:05.406103 containerd[1554]: time="2026-04-17T00:07:05.405879038Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 17 00:07:05.406103 containerd[1554]: time="2026-04-17T00:07:05.405888748Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 17 00:07:05.406244 containerd[1554]: time="2026-04-17T00:07:05.406208058Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 17 00:07:05.406244 containerd[1554]: time="2026-04-17T00:07:05.406224818Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 17 00:07:05.406244 containerd[1554]: time="2026-04-17T00:07:05.406234458Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 17 00:07:05.406294 containerd[1554]: time="2026-04-17T00:07:05.406244678Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 17 00:07:05.406294 containerd[1554]: time="2026-04-17T00:07:05.406283468Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 17 00:07:05.406336 containerd[1554]: time="2026-04-17T00:07:05.406295758Z" level=info msg="Start snapshots syncer"
Apr 17 00:07:05.407296 containerd[1554]: time="2026-04-17T00:07:05.407270160Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 17 00:07:05.407629 containerd[1554]: time="2026-04-17T00:07:05.407594310Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 17 00:07:05.407724 containerd[1554]: time="2026-04-17T00:07:05.407647630Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 17 00:07:05.407724 containerd[1554]: time="2026-04-17T00:07:05.407690950Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 17 00:07:05.407943 containerd[1554]: time="2026-04-17T00:07:05.407804051Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 17 00:07:05.407943 containerd[1554]: time="2026-04-17T00:07:05.407826951Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 17 00:07:05.407943 containerd[1554]: time="2026-04-17T00:07:05.407836311Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 17 00:07:05.407943 containerd[1554]: time="2026-04-17T00:07:05.407846261Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 17 00:07:05.407943 containerd[1554]: time="2026-04-17T00:07:05.407857401Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 17 00:07:05.407943 containerd[1554]: time="2026-04-17T00:07:05.407866791Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 17 00:07:05.407943 containerd[1554]: time="2026-04-17T00:07:05.407876171Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 17 00:07:05.407943 containerd[1554]: time="2026-04-17T00:07:05.407899071Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 17 00:07:05.407943 containerd[1554]: time="2026-04-17T00:07:05.407908591Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 17 00:07:05.407943 containerd[1554]: time="2026-04-17T00:07:05.407917521Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.407955651Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.407968311Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.407975561Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.407984181Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.407991251Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.407999981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.408014221Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 17 00:07:05.408146 containerd[1554]:
time="2026-04-17T00:07:05.408029031Z" level=info msg="runtime interface created" Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.408034231Z" level=info msg="created NRI interface" Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.408045651Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.408055321Z" level=info msg="Connect containerd service" Apr 17 00:07:05.408146 containerd[1554]: time="2026-04-17T00:07:05.408070501Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 17 00:07:05.413539 containerd[1554]: time="2026-04-17T00:07:05.413460509Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 17 00:07:05.429127 locksmithd[1570]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 00:07:05.444979 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 00:07:05.450322 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 00:07:05.475939 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 00:07:05.476451 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 00:07:05.483333 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Apr 17 00:07:05.523119 containerd[1554]: time="2026-04-17T00:07:05.523047383Z" level=info msg="Start subscribing containerd event" Apr 17 00:07:05.524676 containerd[1554]: time="2026-04-17T00:07:05.524347495Z" level=info msg="Start recovering state" Apr 17 00:07:05.524676 containerd[1554]: time="2026-04-17T00:07:05.524450396Z" level=info msg="Start event monitor" Apr 17 00:07:05.524676 containerd[1554]: time="2026-04-17T00:07:05.524463176Z" level=info msg="Start cni network conf syncer for default" Apr 17 00:07:05.524676 containerd[1554]: time="2026-04-17T00:07:05.524469986Z" level=info msg="Start streaming server" Apr 17 00:07:05.524676 containerd[1554]: time="2026-04-17T00:07:05.524478056Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 17 00:07:05.524676 containerd[1554]: time="2026-04-17T00:07:05.524484716Z" level=info msg="runtime interface starting up..." Apr 17 00:07:05.524676 containerd[1554]: time="2026-04-17T00:07:05.524490196Z" level=info msg="starting plugins..." Apr 17 00:07:05.524676 containerd[1554]: time="2026-04-17T00:07:05.524503266Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 17 00:07:05.525508 containerd[1554]: time="2026-04-17T00:07:05.525324277Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 17 00:07:05.526392 containerd[1554]: time="2026-04-17T00:07:05.525627917Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 17 00:07:05.526684 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 00:07:05.530717 containerd[1554]: time="2026-04-17T00:07:05.530700765Z" level=info msg="containerd successfully booted in 0.176233s" Apr 17 00:07:05.531499 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 00:07:05.535635 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 00:07:05.537827 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 17 00:07:05.539532 systemd[1]: Started containerd.service - containerd container runtime. Apr 17 00:07:05.544472 systemd-networkd[1449]: eth0: DHCPv4 address 172.238.171.230/24, gateway 172.238.171.1 acquired from 23.205.167.124 Apr 17 00:07:05.546360 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1449 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 00:07:05.550320 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Apr 17 00:07:05.552234 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 17 00:07:05.641318 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 17 00:07:05.642819 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 00:07:05.644580 dbus-daemon[1515]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1635 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 00:07:05.650823 systemd[1]: Starting polkit.service - Authorization Manager... Apr 17 00:07:05.676846 tar[1544]: linux-amd64/README.md Apr 17 00:07:05.694261 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Apr 17 00:07:05.742054 polkitd[1636]: Started polkitd version 126 Apr 17 00:07:05.746100 polkitd[1636]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 00:07:05.746353 polkitd[1636]: Loading rules from directory /run/polkit-1/rules.d Apr 17 00:07:05.746393 polkitd[1636]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Apr 17 00:07:05.746598 polkitd[1636]: Loading rules from directory /usr/local/share/polkit-1/rules.d Apr 17 00:07:05.746618 polkitd[1636]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Apr 17 00:07:05.746654 polkitd[1636]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 00:07:05.747501 polkitd[1636]: Finished loading, compiling and executing 2 rules Apr 17 00:07:05.748213 systemd[1]: Started polkit.service - Authorization Manager. Apr 17 00:07:05.748317 dbus-daemon[1515]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 00:07:05.748609 polkitd[1636]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 00:07:05.759177 systemd-resolved[1450]: System hostname changed to '172-238-171-230'. 
Apr 17 00:07:05.759420 systemd-hostnamed[1635]: Hostname set to <172-238-171-230> (transient) Apr 17 00:07:06.010612 coreos-metadata[1514]: Apr 17 00:07:06.010 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 17 00:07:06.102537 coreos-metadata[1514]: Apr 17 00:07:06.102 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Apr 17 00:07:06.291765 coreos-metadata[1514]: Apr 17 00:07:06.291 INFO Fetch successful Apr 17 00:07:06.292172 coreos-metadata[1514]: Apr 17 00:07:06.292 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Apr 17 00:07:06.333107 coreos-metadata[1593]: Apr 17 00:07:06.333 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 17 00:07:06.365270 systemd-networkd[1449]: eth0: Gained IPv6LL Apr 17 00:07:06.365985 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Apr 17 00:07:06.370938 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 00:07:06.372798 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 00:07:06.375993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:07:06.380462 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 00:07:06.407939 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 00:07:06.425166 coreos-metadata[1593]: Apr 17 00:07:06.424 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Apr 17 00:07:06.564276 coreos-metadata[1593]: Apr 17 00:07:06.563 INFO Fetch successful Apr 17 00:07:06.587737 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys" Apr 17 00:07:06.588323 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 00:07:06.590704 systemd[1]: Finished sshkeys.service. 
Apr 17 00:07:06.650104 coreos-metadata[1514]: Apr 17 00:07:06.650 INFO Fetch successful Apr 17 00:07:06.757155 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 00:07:06.759007 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 00:07:07.299581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:07:07.300818 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 00:07:07.309147 systemd[1]: Startup finished in 2.986s (kernel) + 8.220s (initrd) + 5.494s (userspace) = 16.701s. Apr 17 00:07:07.366554 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 00:07:07.867464 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Apr 17 00:07:07.914944 kubelet[1692]: E0417 00:07:07.914874 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 00:07:07.918416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 00:07:07.918670 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 00:07:07.919180 systemd[1]: kubelet.service: Consumed 891ms CPU time, 267.6M memory peak. Apr 17 00:07:08.556726 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 00:07:08.558622 systemd[1]: Started sshd@0-172.238.171.230:22-20.229.252.112:54604.service - OpenSSH per-connection server daemon (20.229.252.112:54604). Apr 17 00:07:09.054057 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. 
Apr 17 00:07:09.089075 sshd[1704]: Accepted publickey for core from 20.229.252.112 port 54604 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:07:09.092005 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:07:09.099448 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 00:07:09.101465 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 00:07:09.111334 systemd-logind[1526]: New session 1 of user core. Apr 17 00:07:09.120273 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 00:07:09.123731 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 00:07:09.138759 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 00:07:09.141770 systemd-logind[1526]: New session c1 of user core. Apr 17 00:07:09.277821 systemd[1709]: Queued start job for default target default.target. Apr 17 00:07:09.284664 systemd[1709]: Created slice app.slice - User Application Slice. Apr 17 00:07:09.284695 systemd[1709]: Reached target paths.target - Paths. Apr 17 00:07:09.284827 systemd[1709]: Reached target timers.target - Timers. Apr 17 00:07:09.286448 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 00:07:09.298389 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 00:07:09.298512 systemd[1709]: Reached target sockets.target - Sockets. Apr 17 00:07:09.298553 systemd[1709]: Reached target basic.target - Basic System. Apr 17 00:07:09.298601 systemd[1709]: Reached target default.target - Main User Target. Apr 17 00:07:09.298635 systemd[1709]: Startup finished in 150ms. Apr 17 00:07:09.298874 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 00:07:09.311252 systemd[1]: Started session-1.scope - Session 1 of User core. 
Apr 17 00:07:09.617035 systemd[1]: Started sshd@1-172.238.171.230:22-20.229.252.112:54608.service - OpenSSH per-connection server daemon (20.229.252.112:54608). Apr 17 00:07:10.139018 sshd[1720]: Accepted publickey for core from 20.229.252.112 port 54608 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:07:10.140437 sshd-session[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:07:10.146362 systemd-logind[1526]: New session 2 of user core. Apr 17 00:07:10.156276 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 17 00:07:10.431907 sshd[1723]: Connection closed by 20.229.252.112 port 54608 Apr 17 00:07:10.433299 sshd-session[1720]: pam_unix(sshd:session): session closed for user core Apr 17 00:07:10.436893 systemd[1]: sshd@1-172.238.171.230:22-20.229.252.112:54608.service: Deactivated successfully. Apr 17 00:07:10.439360 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 00:07:10.440292 systemd-logind[1526]: Session 2 logged out. Waiting for processes to exit. Apr 17 00:07:10.443265 systemd-logind[1526]: Removed session 2. Apr 17 00:07:10.538015 systemd[1]: Started sshd@2-172.238.171.230:22-20.229.252.112:54610.service - OpenSSH per-connection server daemon (20.229.252.112:54610). Apr 17 00:07:11.054791 sshd[1729]: Accepted publickey for core from 20.229.252.112 port 54610 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:07:11.056379 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:07:11.061614 systemd-logind[1526]: New session 3 of user core. Apr 17 00:07:11.073265 systemd[1]: Started session-3.scope - Session 3 of User core. 
Apr 17 00:07:11.342339 sshd[1732]: Connection closed by 20.229.252.112 port 54610 Apr 17 00:07:11.344167 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Apr 17 00:07:11.347812 systemd[1]: sshd@2-172.238.171.230:22-20.229.252.112:54610.service: Deactivated successfully. Apr 17 00:07:11.350100 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 00:07:11.350839 systemd-logind[1526]: Session 3 logged out. Waiting for processes to exit. Apr 17 00:07:11.352460 systemd-logind[1526]: Removed session 3. Apr 17 00:07:11.449980 systemd[1]: Started sshd@3-172.238.171.230:22-20.229.252.112:54618.service - OpenSSH per-connection server daemon (20.229.252.112:54618). Apr 17 00:07:11.979181 sshd[1738]: Accepted publickey for core from 20.229.252.112 port 54618 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:07:11.980892 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:07:11.986770 systemd-logind[1526]: New session 4 of user core. Apr 17 00:07:11.996230 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 17 00:07:12.275321 sshd[1741]: Connection closed by 20.229.252.112 port 54618 Apr 17 00:07:12.275813 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Apr 17 00:07:12.279894 systemd-logind[1526]: Session 4 logged out. Waiting for processes to exit. Apr 17 00:07:12.280727 systemd[1]: sshd@3-172.238.171.230:22-20.229.252.112:54618.service: Deactivated successfully. Apr 17 00:07:12.282637 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 00:07:12.284488 systemd-logind[1526]: Removed session 4. Apr 17 00:07:12.383693 systemd[1]: Started sshd@4-172.238.171.230:22-20.229.252.112:54626.service - OpenSSH per-connection server daemon (20.229.252.112:54626). 
Apr 17 00:07:12.903985 sshd[1747]: Accepted publickey for core from 20.229.252.112 port 54626 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:07:12.905671 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:07:12.912294 systemd-logind[1526]: New session 5 of user core. Apr 17 00:07:12.917246 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 00:07:13.110077 sudo[1751]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 00:07:13.110474 sudo[1751]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 00:07:13.124818 sudo[1751]: pam_unix(sudo:session): session closed for user root Apr 17 00:07:13.222078 sshd[1750]: Connection closed by 20.229.252.112 port 54626 Apr 17 00:07:13.223756 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Apr 17 00:07:13.227595 systemd-logind[1526]: Session 5 logged out. Waiting for processes to exit. Apr 17 00:07:13.228553 systemd[1]: sshd@4-172.238.171.230:22-20.229.252.112:54626.service: Deactivated successfully. Apr 17 00:07:13.230939 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 00:07:13.232634 systemd-logind[1526]: Removed session 5. Apr 17 00:07:13.327702 systemd[1]: Started sshd@5-172.238.171.230:22-20.229.252.112:54634.service - OpenSSH per-connection server daemon (20.229.252.112:54634). Apr 17 00:07:13.850105 sshd[1757]: Accepted publickey for core from 20.229.252.112 port 54634 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:07:13.851968 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:07:13.857678 systemd-logind[1526]: New session 6 of user core. Apr 17 00:07:13.863256 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 17 00:07:14.048274 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 00:07:14.048605 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 00:07:14.052457 sudo[1762]: pam_unix(sudo:session): session closed for user root Apr 17 00:07:14.058105 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 17 00:07:14.058427 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 00:07:14.068420 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 17 00:07:14.106809 augenrules[1784]: No rules Apr 17 00:07:14.107723 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 00:07:14.108060 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 17 00:07:14.109816 sudo[1761]: pam_unix(sudo:session): session closed for user root Apr 17 00:07:14.205874 sshd[1760]: Connection closed by 20.229.252.112 port 54634 Apr 17 00:07:14.207329 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Apr 17 00:07:14.211408 systemd-logind[1526]: Session 6 logged out. Waiting for processes to exit. Apr 17 00:07:14.212367 systemd[1]: sshd@5-172.238.171.230:22-20.229.252.112:54634.service: Deactivated successfully. Apr 17 00:07:14.214267 systemd[1]: session-6.scope: Deactivated successfully. Apr 17 00:07:14.216282 systemd-logind[1526]: Removed session 6. Apr 17 00:07:14.314944 systemd[1]: Started sshd@6-172.238.171.230:22-20.229.252.112:51770.service - OpenSSH per-connection server daemon (20.229.252.112:51770). 
Apr 17 00:07:14.831304 sshd[1793]: Accepted publickey for core from 20.229.252.112 port 51770 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:07:14.833052 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:07:14.838406 systemd-logind[1526]: New session 7 of user core. Apr 17 00:07:14.842215 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 00:07:15.029667 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 00:07:15.030144 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 00:07:15.318587 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 00:07:15.326566 (dockerd)[1815]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 00:07:15.538453 dockerd[1815]: time="2026-04-17T00:07:15.538198835Z" level=info msg="Starting up" Apr 17 00:07:15.539711 dockerd[1815]: time="2026-04-17T00:07:15.539628037Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 17 00:07:15.550520 dockerd[1815]: time="2026-04-17T00:07:15.550482414Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 17 00:07:15.590732 dockerd[1815]: time="2026-04-17T00:07:15.590367323Z" level=info msg="Loading containers: start." Apr 17 00:07:15.602101 kernel: Initializing XFRM netlink socket Apr 17 00:07:15.808825 systemd-timesyncd[1462]: Network configuration changed, trying to establish connection. Apr 17 00:07:15.859603 systemd-networkd[1449]: docker0: Link UP Apr 17 00:07:15.867401 dockerd[1815]: time="2026-04-17T00:07:15.867343099Z" level=info msg="Loading containers: done." 
Apr 17 00:07:15.887111 dockerd[1815]: time="2026-04-17T00:07:15.886130797Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 00:07:15.887111 dockerd[1815]: time="2026-04-17T00:07:15.886267687Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 17 00:07:15.887111 dockerd[1815]: time="2026-04-17T00:07:15.886358037Z" level=info msg="Initializing buildkit" Apr 17 00:07:15.886586 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck710291914-merged.mount: Deactivated successfully. Apr 17 00:07:15.909068 dockerd[1815]: time="2026-04-17T00:07:15.908975971Z" level=info msg="Completed buildkit initialization" Apr 17 00:07:15.916736 dockerd[1815]: time="2026-04-17T00:07:15.916613053Z" level=info msg="Daemon has completed initialization" Apr 17 00:07:15.916856 dockerd[1815]: time="2026-04-17T00:07:15.916724643Z" level=info msg="API listen on /run/docker.sock" Apr 17 00:07:15.917259 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 00:07:16.757675 systemd-timesyncd[1462]: Contacted time server [2600:3c05::f03c:94ff:fe24:8b40]:123 (2.flatcar.pool.ntp.org). Apr 17 00:07:16.757775 systemd-timesyncd[1462]: Initial clock synchronization to Fri 2026-04-17 00:07:16.757335 UTC. Apr 17 00:07:16.758499 systemd-resolved[1450]: Clock change detected. Flushing caches. Apr 17 00:07:17.373689 containerd[1554]: time="2026-04-17T00:07:17.373649029Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 00:07:17.980615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337476768.mount: Deactivated successfully. Apr 17 00:07:18.812629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 17 00:07:18.816182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:07:19.020199 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:07:19.028542 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 00:07:19.071552 kubelet[2090]: E0417 00:07:19.071456 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 00:07:19.076780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 00:07:19.077418 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 00:07:19.079165 systemd[1]: kubelet.service: Consumed 198ms CPU time, 108.7M memory peak. 
Apr 17 00:07:19.347596 containerd[1554]: time="2026-04-17T00:07:19.347493160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:19.348604 containerd[1554]: time="2026-04-17T00:07:19.348364871Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193995" Apr 17 00:07:19.348941 containerd[1554]: time="2026-04-17T00:07:19.348901622Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:19.350979 containerd[1554]: time="2026-04-17T00:07:19.350937515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:19.352191 containerd[1554]: time="2026-04-17T00:07:19.351862106Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.978175637s" Apr 17 00:07:19.352191 containerd[1554]: time="2026-04-17T00:07:19.351888956Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 17 00:07:19.352426 containerd[1554]: time="2026-04-17T00:07:19.352341937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 00:07:20.733192 containerd[1554]: time="2026-04-17T00:07:20.733125258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:20.734227 containerd[1554]: time="2026-04-17T00:07:20.733971819Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171453" Apr 17 00:07:20.734824 containerd[1554]: time="2026-04-17T00:07:20.734786801Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:20.736958 containerd[1554]: time="2026-04-17T00:07:20.736927744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:20.738121 containerd[1554]: time="2026-04-17T00:07:20.738083866Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.385518469s" Apr 17 00:07:20.738197 containerd[1554]: time="2026-04-17T00:07:20.738183556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 17 00:07:20.738846 containerd[1554]: time="2026-04-17T00:07:20.738820567Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 00:07:21.930585 containerd[1554]: time="2026-04-17T00:07:21.930502344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:21.931604 containerd[1554]: 
time="2026-04-17T00:07:21.931580796Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289762" Apr 17 00:07:21.932292 containerd[1554]: time="2026-04-17T00:07:21.932256837Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:21.934657 containerd[1554]: time="2026-04-17T00:07:21.934611100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:21.935592 containerd[1554]: time="2026-04-17T00:07:21.935564452Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.196716035s" Apr 17 00:07:21.935642 containerd[1554]: time="2026-04-17T00:07:21.935595752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 17 00:07:21.936029 containerd[1554]: time="2026-04-17T00:07:21.935972762Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 00:07:22.989614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount708326647.mount: Deactivated successfully. 
Apr 17 00:07:23.339128 containerd[1554]: time="2026-04-17T00:07:23.339015457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:23.340132 containerd[1554]: time="2026-04-17T00:07:23.339731398Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010717" Apr 17 00:07:23.340794 containerd[1554]: time="2026-04-17T00:07:23.340747329Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:23.342736 containerd[1554]: time="2026-04-17T00:07:23.342692552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:23.343101 containerd[1554]: time="2026-04-17T00:07:23.343070563Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.407071781s" Apr 17 00:07:23.343136 containerd[1554]: time="2026-04-17T00:07:23.343102643Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 17 00:07:23.343968 containerd[1554]: time="2026-04-17T00:07:23.343886934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 00:07:23.858915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248034491.mount: Deactivated successfully. 
Apr 17 00:07:24.585429 containerd[1554]: time="2026-04-17T00:07:24.585362116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:24.586628 containerd[1554]: time="2026-04-17T00:07:24.586601878Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244" Apr 17 00:07:24.587344 containerd[1554]: time="2026-04-17T00:07:24.587315869Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:24.589757 containerd[1554]: time="2026-04-17T00:07:24.589729433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:24.590676 containerd[1554]: time="2026-04-17T00:07:24.590653824Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.24671529s" Apr 17 00:07:24.590759 containerd[1554]: time="2026-04-17T00:07:24.590744144Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 17 00:07:24.591399 containerd[1554]: time="2026-04-17T00:07:24.591370895Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 00:07:25.088687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2120155080.mount: Deactivated successfully. 
Apr 17 00:07:25.093289 containerd[1554]: time="2026-04-17T00:07:25.093253088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:07:25.094074 containerd[1554]: time="2026-04-17T00:07:25.093916129Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Apr 17 00:07:25.094884 containerd[1554]: time="2026-04-17T00:07:25.094849330Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:07:25.096895 containerd[1554]: time="2026-04-17T00:07:25.096833133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 00:07:25.097642 containerd[1554]: time="2026-04-17T00:07:25.097497774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 505.996079ms" Apr 17 00:07:25.097642 containerd[1554]: time="2026-04-17T00:07:25.097524014Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 17 00:07:25.097958 containerd[1554]: time="2026-04-17T00:07:25.097917225Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 00:07:25.621476 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4234606202.mount: 
Deactivated successfully. Apr 17 00:07:26.413935 containerd[1554]: time="2026-04-17T00:07:26.413858099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:26.415178 containerd[1554]: time="2026-04-17T00:07:26.415154701Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719432" Apr 17 00:07:26.415583 containerd[1554]: time="2026-04-17T00:07:26.415544391Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:26.419078 containerd[1554]: time="2026-04-17T00:07:26.418956986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:26.420270 containerd[1554]: time="2026-04-17T00:07:26.419810377Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.321852182s" Apr 17 00:07:26.420270 containerd[1554]: time="2026-04-17T00:07:26.419836278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 17 00:07:29.312734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 00:07:29.316227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:07:29.485202 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 00:07:29.485313 systemd[1]: kubelet.service: Failed with result 'signal'. 
Apr 17 00:07:29.485732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:07:29.486259 systemd[1]: kubelet.service: Consumed 137ms CPU time, 98.2M memory peak. Apr 17 00:07:29.494669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:07:29.516786 systemd[1]: Reload requested from client PID 2265 ('systemctl') (unit session-7.scope)... Apr 17 00:07:29.516806 systemd[1]: Reloading... Apr 17 00:07:29.653101 zram_generator::config[2313]: No configuration found. Apr 17 00:07:29.866406 systemd[1]: Reloading finished in 349 ms. Apr 17 00:07:29.931739 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 00:07:29.931877 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 00:07:29.932529 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:07:29.932643 systemd[1]: kubelet.service: Consumed 153ms CPU time, 98.3M memory peak. Apr 17 00:07:29.934867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:07:30.110295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:07:30.118365 (kubelet)[2364]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 00:07:30.155618 kubelet[2364]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 00:07:30.155618 kubelet[2364]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 00:07:30.155618 kubelet[2364]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 00:07:30.156135 kubelet[2364]: I0417 00:07:30.155662 2364 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 00:07:31.036979 kubelet[2364]: I0417 00:07:31.036409 2364 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 00:07:31.036979 kubelet[2364]: I0417 00:07:31.036448 2364 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 00:07:31.036979 kubelet[2364]: I0417 00:07:31.036851 2364 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 00:07:31.075228 kubelet[2364]: E0417 00:07:31.075174 2364 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.238.171.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.238.171.230:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 00:07:31.077388 kubelet[2364]: I0417 00:07:31.077245 2364 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 00:07:31.085419 kubelet[2364]: I0417 00:07:31.085389 2364 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 00:07:31.089238 kubelet[2364]: I0417 00:07:31.089219 2364 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 00:07:31.090058 kubelet[2364]: I0417 00:07:31.090004 2364 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 00:07:31.090233 kubelet[2364]: I0417 00:07:31.090057 2364 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-171-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 00:07:31.090402 kubelet[2364]: I0417 00:07:31.090236 2364 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
00:07:31.090402 kubelet[2364]: I0417 00:07:31.090246 2364 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 00:07:31.090402 kubelet[2364]: I0417 00:07:31.090371 2364 state_mem.go:36] "Initialized new in-memory state store" Apr 17 00:07:31.096276 kubelet[2364]: I0417 00:07:31.096090 2364 kubelet.go:480] "Attempting to sync node with API server" Apr 17 00:07:31.096276 kubelet[2364]: I0417 00:07:31.096150 2364 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 00:07:31.096276 kubelet[2364]: I0417 00:07:31.096247 2364 kubelet.go:386] "Adding apiserver pod source" Apr 17 00:07:31.098503 kubelet[2364]: I0417 00:07:31.098325 2364 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 00:07:31.107991 kubelet[2364]: E0417 00:07:31.107700 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.238.171.230:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-238-171-230&limit=500&resourceVersion=0\": dial tcp 172.238.171.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 00:07:31.108185 kubelet[2364]: E0417 00:07:31.108155 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.238.171.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.238.171.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 00:07:31.108270 kubelet[2364]: I0417 00:07:31.108254 2364 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 00:07:31.109761 kubelet[2364]: I0417 00:07:31.109725 2364 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 
00:07:31.112062 kubelet[2364]: W0417 00:07:31.111590 2364 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 00:07:31.119346 kubelet[2364]: I0417 00:07:31.119313 2364 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 00:07:31.119414 kubelet[2364]: I0417 00:07:31.119368 2364 server.go:1289] "Started kubelet" Apr 17 00:07:31.119620 kubelet[2364]: I0417 00:07:31.119496 2364 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 00:07:31.121519 kubelet[2364]: I0417 00:07:31.121500 2364 server.go:317] "Adding debug handlers to kubelet server" Apr 17 00:07:31.125426 kubelet[2364]: I0417 00:07:31.125358 2364 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 00:07:31.125973 kubelet[2364]: I0417 00:07:31.125774 2364 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 00:07:31.127409 kubelet[2364]: E0417 00:07:31.125885 2364 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.238.171.230:6443/api/v1/namespaces/default/events\": dial tcp 172.238.171.230:6443: connect: connection refused" event="&Event{ObjectMeta:{172-238-171-230.18a6fc3f217eb6aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-238-171-230,UID:172-238-171-230,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-238-171-230,},FirstTimestamp:2026-04-17 00:07:31.119330986 +0000 UTC m=+0.996199155,LastTimestamp:2026-04-17 00:07:31.119330986 +0000 UTC m=+0.996199155,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-238-171-230,}" Apr 17 00:07:31.130505 kubelet[2364]: E0417 00:07:31.130472 2364 
kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 00:07:31.132473 kubelet[2364]: I0417 00:07:31.130823 2364 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 00:07:31.132473 kubelet[2364]: I0417 00:07:31.130946 2364 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 00:07:31.132473 kubelet[2364]: I0417 00:07:31.131006 2364 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 00:07:31.133486 kubelet[2364]: I0417 00:07:31.133473 2364 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 00:07:31.133592 kubelet[2364]: I0417 00:07:31.133580 2364 reconciler.go:26] "Reconciler: start to sync state" Apr 17 00:07:31.135355 kubelet[2364]: E0417 00:07:31.135328 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.238.171.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.238.171.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 00:07:31.137124 kubelet[2364]: E0417 00:07:31.137105 2364 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-238-171-230\" not found" Apr 17 00:07:31.137665 kubelet[2364]: I0417 00:07:31.137647 2364 factory.go:223] Registration of the systemd container factory successfully Apr 17 00:07:31.137922 kubelet[2364]: I0417 00:07:31.137904 2364 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 00:07:31.138546 kubelet[2364]: E0417 00:07:31.138513 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.238.171.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-171-230?timeout=10s\": dial tcp 172.238.171.230:6443: connect: connection refused" interval="200ms" Apr 17 00:07:31.141065 kubelet[2364]: I0417 00:07:31.139751 2364 factory.go:223] Registration of the containerd container factory successfully Apr 17 00:07:31.156128 kubelet[2364]: I0417 00:07:31.156070 2364 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 00:07:31.156536 kubelet[2364]: I0417 00:07:31.156522 2364 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 00:07:31.156764 kubelet[2364]: I0417 00:07:31.156753 2364 state_mem.go:36] "Initialized new in-memory state store" Apr 17 00:07:31.159556 kubelet[2364]: I0417 00:07:31.159544 2364 policy_none.go:49] "None policy: Start" Apr 17 00:07:31.159642 kubelet[2364]: I0417 00:07:31.159631 2364 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 00:07:31.159714 kubelet[2364]: I0417 00:07:31.159706 2364 state_mem.go:35] "Initializing new in-memory state store" Apr 17 00:07:31.172449 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 00:07:31.175119 kubelet[2364]: I0417 00:07:31.175021 2364 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 00:07:31.178161 kubelet[2364]: I0417 00:07:31.178134 2364 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 00:07:31.178278 kubelet[2364]: I0417 00:07:31.178262 2364 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 00:07:31.179178 kubelet[2364]: I0417 00:07:31.179161 2364 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 00:07:31.179331 kubelet[2364]: I0417 00:07:31.179320 2364 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 00:07:31.179516 kubelet[2364]: E0417 00:07:31.179496 2364 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 00:07:31.181441 kubelet[2364]: E0417 00:07:31.181401 2364 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.238.171.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.238.171.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 00:07:31.186676 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 00:07:31.192453 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 00:07:31.201341 kubelet[2364]: E0417 00:07:31.201318 2364 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 00:07:31.202926 kubelet[2364]: I0417 00:07:31.202907 2364 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 00:07:31.204108 kubelet[2364]: I0417 00:07:31.203197 2364 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 00:07:31.204108 kubelet[2364]: I0417 00:07:31.203915 2364 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 00:07:31.205959 kubelet[2364]: E0417 00:07:31.205919 2364 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 17 00:07:31.206079 kubelet[2364]: E0417 00:07:31.205982 2364 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-238-171-230\" not found" Apr 17 00:07:31.293111 systemd[1]: Created slice kubepods-burstable-pod3ba9cdf9afc88f9226def4559996a3a2.slice - libcontainer container kubepods-burstable-pod3ba9cdf9afc88f9226def4559996a3a2.slice. Apr 17 00:07:31.305176 kubelet[2364]: I0417 00:07:31.305142 2364 kubelet_node_status.go:75] "Attempting to register node" node="172-238-171-230" Apr 17 00:07:31.305465 kubelet[2364]: E0417 00:07:31.305438 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.171.230:6443/api/v1/nodes\": dial tcp 172.238.171.230:6443: connect: connection refused" node="172-238-171-230" Apr 17 00:07:31.307143 kubelet[2364]: E0417 00:07:31.307112 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-230\" not found" node="172-238-171-230" Apr 17 00:07:31.309452 systemd[1]: Created slice kubepods-burstable-pod6ccc3dc7d0eb9b2e10238c08d50fc902.slice - libcontainer container kubepods-burstable-pod6ccc3dc7d0eb9b2e10238c08d50fc902.slice. Apr 17 00:07:31.317561 kubelet[2364]: E0417 00:07:31.317525 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-230\" not found" node="172-238-171-230" Apr 17 00:07:31.321135 systemd[1]: Created slice kubepods-burstable-pod45f450fa5d2d8d6547351084f10f90bc.slice - libcontainer container kubepods-burstable-pod45f450fa5d2d8d6547351084f10f90bc.slice. 
Apr 17 00:07:31.323519 kubelet[2364]: E0417 00:07:31.323473 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-230\" not found" node="172-238-171-230" Apr 17 00:07:31.340024 kubelet[2364]: E0417 00:07:31.339975 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.171.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-171-230?timeout=10s\": dial tcp 172.238.171.230:6443: connect: connection refused" interval="400ms" Apr 17 00:07:31.435448 kubelet[2364]: I0417 00:07:31.435161 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ba9cdf9afc88f9226def4559996a3a2-ca-certs\") pod \"kube-apiserver-172-238-171-230\" (UID: \"3ba9cdf9afc88f9226def4559996a3a2\") " pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 00:07:31.435448 kubelet[2364]: I0417 00:07:31.435206 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ba9cdf9afc88f9226def4559996a3a2-k8s-certs\") pod \"kube-apiserver-172-238-171-230\" (UID: \"3ba9cdf9afc88f9226def4559996a3a2\") " pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 00:07:31.435448 kubelet[2364]: I0417 00:07:31.435235 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ba9cdf9afc88f9226def4559996a3a2-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-171-230\" (UID: \"3ba9cdf9afc88f9226def4559996a3a2\") " pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 00:07:31.435448 kubelet[2364]: I0417 00:07:31.435255 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/6ccc3dc7d0eb9b2e10238c08d50fc902-k8s-certs\") pod \"kube-controller-manager-172-238-171-230\" (UID: \"6ccc3dc7d0eb9b2e10238c08d50fc902\") " pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:31.435448 kubelet[2364]: I0417 00:07:31.435273 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ccc3dc7d0eb9b2e10238c08d50fc902-ca-certs\") pod \"kube-controller-manager-172-238-171-230\" (UID: \"6ccc3dc7d0eb9b2e10238c08d50fc902\") " pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:31.435670 kubelet[2364]: I0417 00:07:31.435289 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ccc3dc7d0eb9b2e10238c08d50fc902-flexvolume-dir\") pod \"kube-controller-manager-172-238-171-230\" (UID: \"6ccc3dc7d0eb9b2e10238c08d50fc902\") " pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:31.435670 kubelet[2364]: I0417 00:07:31.435305 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ccc3dc7d0eb9b2e10238c08d50fc902-kubeconfig\") pod \"kube-controller-manager-172-238-171-230\" (UID: \"6ccc3dc7d0eb9b2e10238c08d50fc902\") " pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:31.435670 kubelet[2364]: I0417 00:07:31.435322 2364 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ccc3dc7d0eb9b2e10238c08d50fc902-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-171-230\" (UID: \"6ccc3dc7d0eb9b2e10238c08d50fc902\") " pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:31.435670 kubelet[2364]: I0417 00:07:31.435339 2364 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45f450fa5d2d8d6547351084f10f90bc-kubeconfig\") pod \"kube-scheduler-172-238-171-230\" (UID: \"45f450fa5d2d8d6547351084f10f90bc\") " pod="kube-system/kube-scheduler-172-238-171-230" Apr 17 00:07:31.507316 kubelet[2364]: I0417 00:07:31.507255 2364 kubelet_node_status.go:75] "Attempting to register node" node="172-238-171-230" Apr 17 00:07:31.507537 kubelet[2364]: E0417 00:07:31.507510 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.171.230:6443/api/v1/nodes\": dial tcp 172.238.171.230:6443: connect: connection refused" node="172-238-171-230" Apr 17 00:07:31.608631 kubelet[2364]: E0417 00:07:31.608595 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:31.609335 containerd[1554]: time="2026-04-17T00:07:31.609304081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-171-230,Uid:3ba9cdf9afc88f9226def4559996a3a2,Namespace:kube-system,Attempt:0,}" Apr 17 00:07:31.619074 kubelet[2364]: E0417 00:07:31.619003 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:31.619485 containerd[1554]: time="2026-04-17T00:07:31.619455256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-171-230,Uid:6ccc3dc7d0eb9b2e10238c08d50fc902,Namespace:kube-system,Attempt:0,}" Apr 17 00:07:31.624811 kubelet[2364]: E0417 00:07:31.624772 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" 
Apr 17 00:07:31.625702 containerd[1554]: time="2026-04-17T00:07:31.625601846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-171-230,Uid:45f450fa5d2d8d6547351084f10f90bc,Namespace:kube-system,Attempt:0,}" Apr 17 00:07:31.629351 containerd[1554]: time="2026-04-17T00:07:31.629323071Z" level=info msg="connecting to shim 1f7877876df6b5f91718a6c661d7270ce7bca396c439a88724d02ea2b92491f2" address="unix:///run/containerd/s/bac3bb24589f48efa114fbbf0f026d349f5c15e0a354838cd0356de82793d6c1" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:07:31.658151 containerd[1554]: time="2026-04-17T00:07:31.658014434Z" level=info msg="connecting to shim 61d85d295ca8ec1450bc52a900ecfbeb50db12d6e2dcb6beeef74d8baebefa27" address="unix:///run/containerd/s/e2662703a05b140c89ca85e91d3c5ec3162a7f6d1f3443e1bf37ec3023ac6759" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:07:31.663927 containerd[1554]: time="2026-04-17T00:07:31.663889433Z" level=info msg="connecting to shim 4e2f096a1d68404ad95619a1e680466cce3de8efcff84df10b295e00ecbe9e59" address="unix:///run/containerd/s/2465947f319bf853817025a7bf589fd823ec663f20e5f5086fa8a71a78e03fda" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:07:31.695353 systemd[1]: Started cri-containerd-1f7877876df6b5f91718a6c661d7270ce7bca396c439a88724d02ea2b92491f2.scope - libcontainer container 1f7877876df6b5f91718a6c661d7270ce7bca396c439a88724d02ea2b92491f2. Apr 17 00:07:31.700025 systemd[1]: Started cri-containerd-4e2f096a1d68404ad95619a1e680466cce3de8efcff84df10b295e00ecbe9e59.scope - libcontainer container 4e2f096a1d68404ad95619a1e680466cce3de8efcff84df10b295e00ecbe9e59. Apr 17 00:07:31.712176 systemd[1]: Started cri-containerd-61d85d295ca8ec1450bc52a900ecfbeb50db12d6e2dcb6beeef74d8baebefa27.scope - libcontainer container 61d85d295ca8ec1450bc52a900ecfbeb50db12d6e2dcb6beeef74d8baebefa27. 
Apr 17 00:07:31.741254 kubelet[2364]: E0417 00:07:31.741220 2364 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.238.171.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-238-171-230?timeout=10s\": dial tcp 172.238.171.230:6443: connect: connection refused" interval="800ms" Apr 17 00:07:31.766450 containerd[1554]: time="2026-04-17T00:07:31.765957916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-238-171-230,Uid:3ba9cdf9afc88f9226def4559996a3a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f7877876df6b5f91718a6c661d7270ce7bca396c439a88724d02ea2b92491f2\"" Apr 17 00:07:31.767519 kubelet[2364]: E0417 00:07:31.767225 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:31.772430 containerd[1554]: time="2026-04-17T00:07:31.772390906Z" level=info msg="CreateContainer within sandbox \"1f7877876df6b5f91718a6c661d7270ce7bca396c439a88724d02ea2b92491f2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 00:07:31.790698 containerd[1554]: time="2026-04-17T00:07:31.790649483Z" level=info msg="Container 88f83390ad49a8f913dadb45b93ca152f18bb63b029ca9d51f7c2ab43cd6eff9: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:07:31.793267 containerd[1554]: time="2026-04-17T00:07:31.793168207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-238-171-230,Uid:45f450fa5d2d8d6547351084f10f90bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e2f096a1d68404ad95619a1e680466cce3de8efcff84df10b295e00ecbe9e59\"" Apr 17 00:07:31.795551 kubelet[2364]: E0417 00:07:31.795527 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:31.802691 
containerd[1554]: time="2026-04-17T00:07:31.802086340Z" level=info msg="CreateContainer within sandbox \"4e2f096a1d68404ad95619a1e680466cce3de8efcff84df10b295e00ecbe9e59\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 00:07:31.803030 containerd[1554]: time="2026-04-17T00:07:31.803006272Z" level=info msg="CreateContainer within sandbox \"1f7877876df6b5f91718a6c661d7270ce7bca396c439a88724d02ea2b92491f2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"88f83390ad49a8f913dadb45b93ca152f18bb63b029ca9d51f7c2ab43cd6eff9\"" Apr 17 00:07:31.804609 containerd[1554]: time="2026-04-17T00:07:31.804566924Z" level=info msg="StartContainer for \"88f83390ad49a8f913dadb45b93ca152f18bb63b029ca9d51f7c2ab43cd6eff9\"" Apr 17 00:07:31.805664 containerd[1554]: time="2026-04-17T00:07:31.805637406Z" level=info msg="connecting to shim 88f83390ad49a8f913dadb45b93ca152f18bb63b029ca9d51f7c2ab43cd6eff9" address="unix:///run/containerd/s/bac3bb24589f48efa114fbbf0f026d349f5c15e0a354838cd0356de82793d6c1" protocol=ttrpc version=3 Apr 17 00:07:31.812161 containerd[1554]: time="2026-04-17T00:07:31.812142985Z" level=info msg="Container fc8f09a2279b6f21d95a8cb7d9141322629034aaed24e864dfa0cb51fea91db3: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:07:31.824603 containerd[1554]: time="2026-04-17T00:07:31.824576144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-238-171-230,Uid:6ccc3dc7d0eb9b2e10238c08d50fc902,Namespace:kube-system,Attempt:0,} returns sandbox id \"61d85d295ca8ec1450bc52a900ecfbeb50db12d6e2dcb6beeef74d8baebefa27\"" Apr 17 00:07:31.825779 kubelet[2364]: E0417 00:07:31.825761 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:31.828725 containerd[1554]: time="2026-04-17T00:07:31.828643110Z" level=info msg="CreateContainer within sandbox 
\"61d85d295ca8ec1450bc52a900ecfbeb50db12d6e2dcb6beeef74d8baebefa27\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 00:07:31.830007 containerd[1554]: time="2026-04-17T00:07:31.829986222Z" level=info msg="CreateContainer within sandbox \"4e2f096a1d68404ad95619a1e680466cce3de8efcff84df10b295e00ecbe9e59\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc8f09a2279b6f21d95a8cb7d9141322629034aaed24e864dfa0cb51fea91db3\"" Apr 17 00:07:31.831828 containerd[1554]: time="2026-04-17T00:07:31.831810485Z" level=info msg="StartContainer for \"fc8f09a2279b6f21d95a8cb7d9141322629034aaed24e864dfa0cb51fea91db3\"" Apr 17 00:07:31.834367 containerd[1554]: time="2026-04-17T00:07:31.832838046Z" level=info msg="connecting to shim fc8f09a2279b6f21d95a8cb7d9141322629034aaed24e864dfa0cb51fea91db3" address="unix:///run/containerd/s/2465947f319bf853817025a7bf589fd823ec663f20e5f5086fa8a71a78e03fda" protocol=ttrpc version=3 Apr 17 00:07:31.845009 containerd[1554]: time="2026-04-17T00:07:31.844845684Z" level=info msg="Container 818cad01c3ea45b3187e89eadbd0bc7a86630fdf8ef5347a1655f49350020492: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:07:31.845326 systemd[1]: Started cri-containerd-88f83390ad49a8f913dadb45b93ca152f18bb63b029ca9d51f7c2ab43cd6eff9.scope - libcontainer container 88f83390ad49a8f913dadb45b93ca152f18bb63b029ca9d51f7c2ab43cd6eff9. 
Apr 17 00:07:31.857384 containerd[1554]: time="2026-04-17T00:07:31.857331983Z" level=info msg="CreateContainer within sandbox \"61d85d295ca8ec1450bc52a900ecfbeb50db12d6e2dcb6beeef74d8baebefa27\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"818cad01c3ea45b3187e89eadbd0bc7a86630fdf8ef5347a1655f49350020492\"" Apr 17 00:07:31.858591 containerd[1554]: time="2026-04-17T00:07:31.858537675Z" level=info msg="StartContainer for \"818cad01c3ea45b3187e89eadbd0bc7a86630fdf8ef5347a1655f49350020492\"" Apr 17 00:07:31.863233 systemd[1]: Started cri-containerd-fc8f09a2279b6f21d95a8cb7d9141322629034aaed24e864dfa0cb51fea91db3.scope - libcontainer container fc8f09a2279b6f21d95a8cb7d9141322629034aaed24e864dfa0cb51fea91db3. Apr 17 00:07:31.866873 containerd[1554]: time="2026-04-17T00:07:31.866852007Z" level=info msg="connecting to shim 818cad01c3ea45b3187e89eadbd0bc7a86630fdf8ef5347a1655f49350020492" address="unix:///run/containerd/s/e2662703a05b140c89ca85e91d3c5ec3162a7f6d1f3443e1bf37ec3023ac6759" protocol=ttrpc version=3 Apr 17 00:07:31.895129 systemd[1]: Started cri-containerd-818cad01c3ea45b3187e89eadbd0bc7a86630fdf8ef5347a1655f49350020492.scope - libcontainer container 818cad01c3ea45b3187e89eadbd0bc7a86630fdf8ef5347a1655f49350020492. 
Apr 17 00:07:31.911676 kubelet[2364]: I0417 00:07:31.911646 2364 kubelet_node_status.go:75] "Attempting to register node" node="172-238-171-230" Apr 17 00:07:31.911948 kubelet[2364]: E0417 00:07:31.911921 2364 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.238.171.230:6443/api/v1/nodes\": dial tcp 172.238.171.230:6443: connect: connection refused" node="172-238-171-230" Apr 17 00:07:31.934224 containerd[1554]: time="2026-04-17T00:07:31.934189208Z" level=info msg="StartContainer for \"88f83390ad49a8f913dadb45b93ca152f18bb63b029ca9d51f7c2ab43cd6eff9\" returns successfully" Apr 17 00:07:31.969957 containerd[1554]: time="2026-04-17T00:07:31.969917262Z" level=info msg="StartContainer for \"fc8f09a2279b6f21d95a8cb7d9141322629034aaed24e864dfa0cb51fea91db3\" returns successfully" Apr 17 00:07:31.998862 containerd[1554]: time="2026-04-17T00:07:31.998824785Z" level=info msg="StartContainer for \"818cad01c3ea45b3187e89eadbd0bc7a86630fdf8ef5347a1655f49350020492\" returns successfully" Apr 17 00:07:32.192543 kubelet[2364]: E0417 00:07:32.191714 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-230\" not found" node="172-238-171-230" Apr 17 00:07:32.192543 kubelet[2364]: E0417 00:07:32.191838 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:32.193738 kubelet[2364]: E0417 00:07:32.193713 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-230\" not found" node="172-238-171-230" Apr 17 00:07:32.194157 kubelet[2364]: E0417 00:07:32.194139 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 
00:07:32.197018 kubelet[2364]: E0417 00:07:32.196994 2364 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-238-171-230\" not found" node="172-238-171-230" Apr 17 00:07:32.197151 kubelet[2364]: E0417 00:07:32.197133 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:32.714530 kubelet[2364]: I0417 00:07:32.714488 2364 kubelet_node_status.go:75] "Attempting to register node" node="172-238-171-230" Apr 17 00:07:33.052344 kubelet[2364]: E0417 00:07:33.051809 2364 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-238-171-230\" not found" node="172-238-171-230" Apr 17 00:07:33.088894 kubelet[2364]: I0417 00:07:33.088852 2364 kubelet_node_status.go:78] "Successfully registered node" node="172-238-171-230" Apr 17 00:07:33.088894 kubelet[2364]: E0417 00:07:33.088894 2364 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-238-171-230\": node \"172-238-171-230\" not found" Apr 17 00:07:33.108575 kubelet[2364]: I0417 00:07:33.108549 2364 apiserver.go:52] "Watching apiserver" Apr 17 00:07:33.134424 kubelet[2364]: I0417 00:07:33.134352 2364 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 00:07:33.151914 kubelet[2364]: I0417 00:07:33.150289 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 00:07:33.156908 kubelet[2364]: E0417 00:07:33.156869 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-171-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 00:07:33.156984 kubelet[2364]: I0417 00:07:33.156949 2364 kubelet.go:3309] "Creating a 
mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:33.159066 kubelet[2364]: E0417 00:07:33.158243 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-171-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:33.159066 kubelet[2364]: I0417 00:07:33.158270 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-171-230" Apr 17 00:07:33.159853 kubelet[2364]: E0417 00:07:33.159821 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-171-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-238-171-230" Apr 17 00:07:33.196790 kubelet[2364]: I0417 00:07:33.196747 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 00:07:33.197292 kubelet[2364]: I0417 00:07:33.197244 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-171-230" Apr 17 00:07:33.199844 kubelet[2364]: E0417 00:07:33.199800 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-171-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-238-171-230" Apr 17 00:07:33.200014 kubelet[2364]: E0417 00:07:33.199985 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:33.200133 kubelet[2364]: E0417 00:07:33.200112 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-238-171-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 
00:07:33.200301 kubelet[2364]: E0417 00:07:33.200282 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:33.687645 kubelet[2364]: I0417 00:07:33.687615 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:33.689367 kubelet[2364]: E0417 00:07:33.689337 2364 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-238-171-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:33.689517 kubelet[2364]: E0417 00:07:33.689494 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:34.197999 kubelet[2364]: I0417 00:07:34.197953 2364 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-171-230" Apr 17 00:07:34.202547 kubelet[2364]: E0417 00:07:34.202530 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:35.072299 systemd[1]: Reload requested from client PID 2640 ('systemctl') (unit session-7.scope)... Apr 17 00:07:35.072317 systemd[1]: Reloading... Apr 17 00:07:35.176077 zram_generator::config[2684]: No configuration found. Apr 17 00:07:35.201883 kubelet[2364]: E0417 00:07:35.199390 2364 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:35.421868 systemd[1]: Reloading finished in 349 ms. 
Apr 17 00:07:35.446788 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:07:35.468339 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 00:07:35.468649 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:07:35.468717 systemd[1]: kubelet.service: Consumed 1.395s CPU time, 130.2M memory peak. Apr 17 00:07:35.471004 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 00:07:35.662870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 00:07:35.669567 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 00:07:35.717883 kubelet[2735]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 00:07:35.717883 kubelet[2735]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 00:07:35.717883 kubelet[2735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 00:07:35.718613 kubelet[2735]: I0417 00:07:35.717875 2735 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 00:07:35.724751 kubelet[2735]: I0417 00:07:35.724707 2735 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 00:07:35.724841 kubelet[2735]: I0417 00:07:35.724731 2735 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 00:07:35.725143 kubelet[2735]: I0417 00:07:35.725119 2735 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 00:07:35.726529 kubelet[2735]: I0417 00:07:35.726503 2735 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 00:07:35.729002 kubelet[2735]: I0417 00:07:35.728969 2735 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 00:07:35.733003 kubelet[2735]: I0417 00:07:35.732971 2735 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 17 00:07:35.738198 kubelet[2735]: I0417 00:07:35.737229 2735 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 00:07:35.738198 kubelet[2735]: I0417 00:07:35.737508 2735 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 00:07:35.738198 kubelet[2735]: I0417 00:07:35.737531 2735 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-238-171-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 00:07:35.738198 kubelet[2735]: I0417 00:07:35.737711 2735 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
00:07:35.738391 kubelet[2735]: I0417 00:07:35.737719 2735 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 00:07:35.738391 kubelet[2735]: I0417 00:07:35.737760 2735 state_mem.go:36] "Initialized new in-memory state store" Apr 17 00:07:35.738391 kubelet[2735]: I0417 00:07:35.737971 2735 kubelet.go:480] "Attempting to sync node with API server" Apr 17 00:07:35.738582 kubelet[2735]: I0417 00:07:35.738566 2735 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 00:07:35.740055 kubelet[2735]: I0417 00:07:35.740024 2735 kubelet.go:386] "Adding apiserver pod source" Apr 17 00:07:35.740308 kubelet[2735]: I0417 00:07:35.740297 2735 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 00:07:35.746981 kubelet[2735]: I0417 00:07:35.746447 2735 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 17 00:07:35.748068 kubelet[2735]: I0417 00:07:35.747465 2735 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 00:07:35.751128 kubelet[2735]: I0417 00:07:35.751113 2735 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 00:07:35.751246 kubelet[2735]: I0417 00:07:35.751234 2735 server.go:1289] "Started kubelet" Apr 17 00:07:35.753119 kubelet[2735]: I0417 00:07:35.753107 2735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 00:07:35.757100 kubelet[2735]: E0417 00:07:35.757063 2735 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 00:07:35.758012 kubelet[2735]: I0417 00:07:35.757974 2735 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 00:07:35.759472 kubelet[2735]: I0417 00:07:35.759449 2735 server.go:317] "Adding debug handlers to kubelet server" Apr 17 00:07:35.763164 kubelet[2735]: I0417 00:07:35.763150 2735 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 00:07:35.763443 kubelet[2735]: I0417 00:07:35.763392 2735 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 00:07:35.763964 kubelet[2735]: I0417 00:07:35.763598 2735 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 00:07:35.763964 kubelet[2735]: I0417 00:07:35.763882 2735 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 00:07:35.764139 kubelet[2735]: I0417 00:07:35.764124 2735 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 00:07:35.764286 kubelet[2735]: I0417 00:07:35.764276 2735 reconciler.go:26] "Reconciler: start to sync state" Apr 17 00:07:35.767654 kubelet[2735]: I0417 00:07:35.767629 2735 factory.go:223] Registration of the systemd container factory successfully Apr 17 00:07:35.767765 kubelet[2735]: I0417 00:07:35.767742 2735 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 00:07:35.769654 kubelet[2735]: I0417 00:07:35.769622 2735 factory.go:223] Registration of the containerd container factory successfully Apr 17 00:07:35.770075 kubelet[2735]: I0417 00:07:35.769912 2735 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Apr 17 00:07:35.772070 kubelet[2735]: I0417 00:07:35.772021 2735 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 00:07:35.772517 kubelet[2735]: I0417 00:07:35.772135 2735 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 00:07:35.772517 kubelet[2735]: I0417 00:07:35.772166 2735 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 00:07:35.772517 kubelet[2735]: I0417 00:07:35.772173 2735 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 00:07:35.772517 kubelet[2735]: E0417 00:07:35.772231 2735 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 00:07:35.826641 kubelet[2735]: I0417 00:07:35.826605 2735 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 00:07:35.826641 kubelet[2735]: I0417 00:07:35.826627 2735 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 00:07:35.826641 kubelet[2735]: I0417 00:07:35.826644 2735 state_mem.go:36] "Initialized new in-memory state store" Apr 17 00:07:35.826799 kubelet[2735]: I0417 00:07:35.826759 2735 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 00:07:35.826799 kubelet[2735]: I0417 00:07:35.826768 2735 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 00:07:35.826799 kubelet[2735]: I0417 00:07:35.826784 2735 policy_none.go:49] "None policy: Start" Apr 17 00:07:35.826799 kubelet[2735]: I0417 00:07:35.826792 2735 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 00:07:35.826911 kubelet[2735]: I0417 00:07:35.826803 2735 state_mem.go:35] "Initializing new in-memory state store" Apr 17 00:07:35.826911 kubelet[2735]: I0417 00:07:35.826876 2735 state_mem.go:75] "Updated machine memory state" Apr 17 00:07:35.831188 kubelet[2735]: E0417 00:07:35.831167 2735 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 00:07:35.831608 kubelet[2735]: I0417 00:07:35.831320 2735 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 00:07:35.831608 kubelet[2735]: I0417 00:07:35.831335 2735 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 00:07:35.831608 kubelet[2735]: I0417 00:07:35.831522 2735 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 00:07:35.836311 kubelet[2735]: E0417 00:07:35.836280 2735 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 00:07:35.873844 kubelet[2735]: I0417 00:07:35.873800 2735 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-238-171-230" Apr 17 00:07:35.874293 kubelet[2735]: I0417 00:07:35.874267 2735 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 00:07:35.874585 kubelet[2735]: I0417 00:07:35.874556 2735 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:35.880241 kubelet[2735]: E0417 00:07:35.880133 2735 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-238-171-230\" already exists" pod="kube-system/kube-scheduler-172-238-171-230" Apr 17 00:07:35.940985 kubelet[2735]: I0417 00:07:35.940954 2735 kubelet_node_status.go:75] "Attempting to register node" node="172-238-171-230" Apr 17 00:07:35.948170 kubelet[2735]: I0417 00:07:35.948130 2735 kubelet_node_status.go:124] "Node was previously registered" node="172-238-171-230" Apr 17 00:07:35.948288 kubelet[2735]: I0417 00:07:35.948200 2735 kubelet_node_status.go:78] "Successfully registered node" node="172-238-171-230" Apr 17 00:07:35.965648 kubelet[2735]: I0417 00:07:35.965617 2735 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3ba9cdf9afc88f9226def4559996a3a2-ca-certs\") pod \"kube-apiserver-172-238-171-230\" (UID: \"3ba9cdf9afc88f9226def4559996a3a2\") " pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 00:07:35.965648 kubelet[2735]: I0417 00:07:35.965650 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3ba9cdf9afc88f9226def4559996a3a2-k8s-certs\") pod \"kube-apiserver-172-238-171-230\" (UID: \"3ba9cdf9afc88f9226def4559996a3a2\") " pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 00:07:35.965648 kubelet[2735]: I0417 00:07:35.965669 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6ccc3dc7d0eb9b2e10238c08d50fc902-kubeconfig\") pod \"kube-controller-manager-172-238-171-230\" (UID: \"6ccc3dc7d0eb9b2e10238c08d50fc902\") " pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:35.965965 kubelet[2735]: I0417 00:07:35.965687 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3ba9cdf9afc88f9226def4559996a3a2-usr-share-ca-certificates\") pod \"kube-apiserver-172-238-171-230\" (UID: \"3ba9cdf9afc88f9226def4559996a3a2\") " pod="kube-system/kube-apiserver-172-238-171-230" Apr 17 00:07:35.965965 kubelet[2735]: I0417 00:07:35.965706 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ccc3dc7d0eb9b2e10238c08d50fc902-ca-certs\") pod \"kube-controller-manager-172-238-171-230\" (UID: \"6ccc3dc7d0eb9b2e10238c08d50fc902\") " pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:35.965965 
kubelet[2735]: I0417 00:07:35.965721 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6ccc3dc7d0eb9b2e10238c08d50fc902-flexvolume-dir\") pod \"kube-controller-manager-172-238-171-230\" (UID: \"6ccc3dc7d0eb9b2e10238c08d50fc902\") " pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:35.965965 kubelet[2735]: I0417 00:07:35.965735 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ccc3dc7d0eb9b2e10238c08d50fc902-k8s-certs\") pod \"kube-controller-manager-172-238-171-230\" (UID: \"6ccc3dc7d0eb9b2e10238c08d50fc902\") " pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:35.965965 kubelet[2735]: I0417 00:07:35.965749 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ccc3dc7d0eb9b2e10238c08d50fc902-usr-share-ca-certificates\") pod \"kube-controller-manager-172-238-171-230\" (UID: \"6ccc3dc7d0eb9b2e10238c08d50fc902\") " pod="kube-system/kube-controller-manager-172-238-171-230" Apr 17 00:07:35.966155 kubelet[2735]: I0417 00:07:35.965766 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/45f450fa5d2d8d6547351084f10f90bc-kubeconfig\") pod \"kube-scheduler-172-238-171-230\" (UID: \"45f450fa5d2d8d6547351084f10f90bc\") " pod="kube-system/kube-scheduler-172-238-171-230" Apr 17 00:07:36.180816 kubelet[2735]: E0417 00:07:36.180177 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:36.180816 kubelet[2735]: E0417 00:07:36.180318 2735 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:36.181165 kubelet[2735]: E0417 00:07:36.181140 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:36.610524 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 17 00:07:36.742751 kubelet[2735]: I0417 00:07:36.741557 2735 apiserver.go:52] "Watching apiserver" Apr 17 00:07:36.765139 kubelet[2735]: I0417 00:07:36.765107 2735 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 00:07:36.810990 kubelet[2735]: E0417 00:07:36.810251 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:36.811261 kubelet[2735]: E0417 00:07:36.811242 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:36.811645 kubelet[2735]: E0417 00:07:36.811443 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:36.845711 kubelet[2735]: I0417 00:07:36.845654 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-238-171-230" podStartSLOduration=2.845640655 podStartE2EDuration="2.845640655s" podCreationTimestamp="2026-04-17 00:07:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:07:36.845412385 +0000 UTC m=+1.170581147" 
watchObservedRunningTime="2026-04-17 00:07:36.845640655 +0000 UTC m=+1.170809407" Apr 17 00:07:36.857672 kubelet[2735]: I0417 00:07:36.857617 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-238-171-230" podStartSLOduration=1.8576013329999999 podStartE2EDuration="1.857601333s" podCreationTimestamp="2026-04-17 00:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:07:36.856853562 +0000 UTC m=+1.182022314" watchObservedRunningTime="2026-04-17 00:07:36.857601333 +0000 UTC m=+1.182770085" Apr 17 00:07:37.813579 kubelet[2735]: E0417 00:07:37.813523 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:37.814701 kubelet[2735]: E0417 00:07:37.814072 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:40.068824 kubelet[2735]: E0417 00:07:40.068789 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:41.521549 kubelet[2735]: I0417 00:07:41.521485 2735 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 00:07:41.522105 containerd[1554]: time="2026-04-17T00:07:41.521703259Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 17 00:07:41.522488 kubelet[2735]: I0417 00:07:41.522471 2735 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 00:07:42.556168 kubelet[2735]: I0417 00:07:42.556089 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-238-171-230" podStartSLOduration=7.554995709 podStartE2EDuration="7.554995709s" podCreationTimestamp="2026-04-17 00:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:07:36.865980136 +0000 UTC m=+1.191148888" watchObservedRunningTime="2026-04-17 00:07:42.554995709 +0000 UTC m=+6.880164461" Apr 17 00:07:42.570508 systemd[1]: Created slice kubepods-besteffort-podf2a952bd_79e1_4fac_9843_90e0aa882e9e.slice - libcontainer container kubepods-besteffort-podf2a952bd_79e1_4fac_9843_90e0aa882e9e.slice. Apr 17 00:07:42.611498 kubelet[2735]: I0417 00:07:42.611457 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqwrn\" (UniqueName: \"kubernetes.io/projected/f2a952bd-79e1-4fac-9843-90e0aa882e9e-kube-api-access-vqwrn\") pod \"kube-proxy-8fdrv\" (UID: \"f2a952bd-79e1-4fac-9843-90e0aa882e9e\") " pod="kube-system/kube-proxy-8fdrv" Apr 17 00:07:42.611498 kubelet[2735]: I0417 00:07:42.611502 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f2a952bd-79e1-4fac-9843-90e0aa882e9e-kube-proxy\") pod \"kube-proxy-8fdrv\" (UID: \"f2a952bd-79e1-4fac-9843-90e0aa882e9e\") " pod="kube-system/kube-proxy-8fdrv" Apr 17 00:07:42.611498 kubelet[2735]: I0417 00:07:42.611523 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2a952bd-79e1-4fac-9843-90e0aa882e9e-lib-modules\") pod \"kube-proxy-8fdrv\" 
(UID: \"f2a952bd-79e1-4fac-9843-90e0aa882e9e\") " pod="kube-system/kube-proxy-8fdrv" Apr 17 00:07:42.611776 kubelet[2735]: I0417 00:07:42.611540 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2a952bd-79e1-4fac-9843-90e0aa882e9e-xtables-lock\") pod \"kube-proxy-8fdrv\" (UID: \"f2a952bd-79e1-4fac-9843-90e0aa882e9e\") " pod="kube-system/kube-proxy-8fdrv" Apr 17 00:07:42.686215 systemd[1]: Created slice kubepods-besteffort-podd3e3d7dd_7a46_40ca_a94b_25a31ef0197e.slice - libcontainer container kubepods-besteffort-podd3e3d7dd_7a46_40ca_a94b_25a31ef0197e.slice. Apr 17 00:07:42.713352 kubelet[2735]: I0417 00:07:42.712654 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l44jl\" (UniqueName: \"kubernetes.io/projected/d3e3d7dd-7a46-40ca-a94b-25a31ef0197e-kube-api-access-l44jl\") pod \"tigera-operator-6bf85f8dd-9fvgb\" (UID: \"d3e3d7dd-7a46-40ca-a94b-25a31ef0197e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-9fvgb" Apr 17 00:07:42.713352 kubelet[2735]: I0417 00:07:42.712694 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d3e3d7dd-7a46-40ca-a94b-25a31ef0197e-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-9fvgb\" (UID: \"d3e3d7dd-7a46-40ca-a94b-25a31ef0197e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-9fvgb" Apr 17 00:07:42.883401 kubelet[2735]: E0417 00:07:42.883366 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:42.883972 containerd[1554]: time="2026-04-17T00:07:42.883935732Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-8fdrv,Uid:f2a952bd-79e1-4fac-9843-90e0aa882e9e,Namespace:kube-system,Attempt:0,}" Apr 17 00:07:42.888074 kubelet[2735]: E0417 00:07:42.887772 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:42.913951 containerd[1554]: time="2026-04-17T00:07:42.913872737Z" level=info msg="connecting to shim 75a250f8f01698ad23382e13e7be8be01cb77bef183cd77603bda7516cada031" address="unix:///run/containerd/s/bdca7fe161081c1b9027262360691a060e7ee4bc714cd44d235464843856eb5d" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:07:42.944345 systemd[1]: Started cri-containerd-75a250f8f01698ad23382e13e7be8be01cb77bef183cd77603bda7516cada031.scope - libcontainer container 75a250f8f01698ad23382e13e7be8be01cb77bef183cd77603bda7516cada031. Apr 17 00:07:42.986090 containerd[1554]: time="2026-04-17T00:07:42.986026745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8fdrv,Uid:f2a952bd-79e1-4fac-9843-90e0aa882e9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"75a250f8f01698ad23382e13e7be8be01cb77bef183cd77603bda7516cada031\"" Apr 17 00:07:42.986752 kubelet[2735]: E0417 00:07:42.986733 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:42.992111 containerd[1554]: time="2026-04-17T00:07:42.990329782Z" level=info msg="CreateContainer within sandbox \"75a250f8f01698ad23382e13e7be8be01cb77bef183cd77603bda7516cada031\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 00:07:42.992313 containerd[1554]: time="2026-04-17T00:07:42.992295444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-9fvgb,Uid:d3e3d7dd-7a46-40ca-a94b-25a31ef0197e,Namespace:tigera-operator,Attempt:0,}" Apr 17 
00:07:43.007860 containerd[1554]: time="2026-04-17T00:07:43.007116547Z" level=info msg="Container e43ca0e2b945692812084a2648e14e38f241398c53ae0c2ff5fb136ab36c1367: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:07:43.012344 containerd[1554]: time="2026-04-17T00:07:43.012323254Z" level=info msg="CreateContainer within sandbox \"75a250f8f01698ad23382e13e7be8be01cb77bef183cd77603bda7516cada031\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e43ca0e2b945692812084a2648e14e38f241398c53ae0c2ff5fb136ab36c1367\"" Apr 17 00:07:43.013004 containerd[1554]: time="2026-04-17T00:07:43.012974665Z" level=info msg="StartContainer for \"e43ca0e2b945692812084a2648e14e38f241398c53ae0c2ff5fb136ab36c1367\"" Apr 17 00:07:43.014697 containerd[1554]: time="2026-04-17T00:07:43.014663508Z" level=info msg="connecting to shim e43ca0e2b945692812084a2648e14e38f241398c53ae0c2ff5fb136ab36c1367" address="unix:///run/containerd/s/bdca7fe161081c1b9027262360691a060e7ee4bc714cd44d235464843856eb5d" protocol=ttrpc version=3 Apr 17 00:07:43.018170 containerd[1554]: time="2026-04-17T00:07:43.018148873Z" level=info msg="connecting to shim d2f77972623b60c8b47629eaa1fe99845c5ba53d25ec47935ed4a40e02c70e7e" address="unix:///run/containerd/s/014e092a04093932ea5d1331ee4b635b2f41938ebf71b31f59b0e66fdfd5162a" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:07:43.037218 systemd[1]: Started cri-containerd-e43ca0e2b945692812084a2648e14e38f241398c53ae0c2ff5fb136ab36c1367.scope - libcontainer container e43ca0e2b945692812084a2648e14e38f241398c53ae0c2ff5fb136ab36c1367. Apr 17 00:07:43.044172 systemd[1]: Started cri-containerd-d2f77972623b60c8b47629eaa1fe99845c5ba53d25ec47935ed4a40e02c70e7e.scope - libcontainer container d2f77972623b60c8b47629eaa1fe99845c5ba53d25ec47935ed4a40e02c70e7e. 
Apr 17 00:07:43.104567 containerd[1554]: time="2026-04-17T00:07:43.104522853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-9fvgb,Uid:d3e3d7dd-7a46-40ca-a94b-25a31ef0197e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d2f77972623b60c8b47629eaa1fe99845c5ba53d25ec47935ed4a40e02c70e7e\"" Apr 17 00:07:43.110204 containerd[1554]: time="2026-04-17T00:07:43.108924989Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 00:07:43.124142 containerd[1554]: time="2026-04-17T00:07:43.124098122Z" level=info msg="StartContainer for \"e43ca0e2b945692812084a2648e14e38f241398c53ae0c2ff5fb136ab36c1367\" returns successfully" Apr 17 00:07:43.761063 kubelet[2735]: E0417 00:07:43.760182 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:43.824463 kubelet[2735]: E0417 00:07:43.823609 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:43.824463 kubelet[2735]: E0417 00:07:43.823833 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:43.824463 kubelet[2735]: E0417 00:07:43.824164 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:43.965627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount128648857.mount: Deactivated successfully. 
Apr 17 00:07:44.824961 kubelet[2735]: E0417 00:07:44.824915 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:45.449862 containerd[1554]: time="2026-04-17T00:07:45.449813850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:45.450857 containerd[1554]: time="2026-04-17T00:07:45.450681502Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 00:07:45.451545 containerd[1554]: time="2026-04-17T00:07:45.451515593Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:45.453165 containerd[1554]: time="2026-04-17T00:07:45.453132025Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:45.454061 containerd[1554]: time="2026-04-17T00:07:45.454019267Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.345054028s" Apr 17 00:07:45.454152 containerd[1554]: time="2026-04-17T00:07:45.454134617Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 00:07:45.457888 containerd[1554]: time="2026-04-17T00:07:45.457855973Z" level=info msg="CreateContainer within sandbox 
\"d2f77972623b60c8b47629eaa1fe99845c5ba53d25ec47935ed4a40e02c70e7e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 00:07:45.463575 containerd[1554]: time="2026-04-17T00:07:45.463166350Z" level=info msg="Container cf6387184f90bb652702e95038d5f84e6d343a3e180943f747b09f66aca1605e: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:07:45.477006 containerd[1554]: time="2026-04-17T00:07:45.476972651Z" level=info msg="CreateContainer within sandbox \"d2f77972623b60c8b47629eaa1fe99845c5ba53d25ec47935ed4a40e02c70e7e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cf6387184f90bb652702e95038d5f84e6d343a3e180943f747b09f66aca1605e\"" Apr 17 00:07:45.477422 containerd[1554]: time="2026-04-17T00:07:45.477373512Z" level=info msg="StartContainer for \"cf6387184f90bb652702e95038d5f84e6d343a3e180943f747b09f66aca1605e\"" Apr 17 00:07:45.478703 containerd[1554]: time="2026-04-17T00:07:45.478667354Z" level=info msg="connecting to shim cf6387184f90bb652702e95038d5f84e6d343a3e180943f747b09f66aca1605e" address="unix:///run/containerd/s/014e092a04093932ea5d1331ee4b635b2f41938ebf71b31f59b0e66fdfd5162a" protocol=ttrpc version=3 Apr 17 00:07:45.502193 systemd[1]: Started cri-containerd-cf6387184f90bb652702e95038d5f84e6d343a3e180943f747b09f66aca1605e.scope - libcontainer container cf6387184f90bb652702e95038d5f84e6d343a3e180943f747b09f66aca1605e. 
Apr 17 00:07:45.534446 containerd[1554]: time="2026-04-17T00:07:45.534326487Z" level=info msg="StartContainer for \"cf6387184f90bb652702e95038d5f84e6d343a3e180943f747b09f66aca1605e\" returns successfully" Apr 17 00:07:45.836075 kubelet[2735]: I0417 00:07:45.835815 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8fdrv" podStartSLOduration=3.835798569 podStartE2EDuration="3.835798569s" podCreationTimestamp="2026-04-17 00:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:07:43.838365173 +0000 UTC m=+8.163533935" watchObservedRunningTime="2026-04-17 00:07:45.835798569 +0000 UTC m=+10.160967321" Apr 17 00:07:50.075275 kubelet[2735]: E0417 00:07:50.075205 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:50.084073 kubelet[2735]: I0417 00:07:50.083921 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-9fvgb" podStartSLOduration=5.73749481 podStartE2EDuration="8.083907469s" podCreationTimestamp="2026-04-17 00:07:42 +0000 UTC" firstStartedPulling="2026-04-17 00:07:43.108492859 +0000 UTC m=+7.433661611" lastFinishedPulling="2026-04-17 00:07:45.454905518 +0000 UTC m=+9.780074270" observedRunningTime="2026-04-17 00:07:45.838193173 +0000 UTC m=+10.163361925" watchObservedRunningTime="2026-04-17 00:07:50.083907469 +0000 UTC m=+14.409076221" Apr 17 00:07:50.836331 kubelet[2735]: E0417 00:07:50.836273 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:51.147129 sudo[1797]: pam_unix(sudo:session): session closed for user root Apr 17 00:07:51.243174 
sshd[1796]: Connection closed by 20.229.252.112 port 51770 Apr 17 00:07:51.244272 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Apr 17 00:07:51.254137 systemd-logind[1526]: Session 7 logged out. Waiting for processes to exit. Apr 17 00:07:51.255619 systemd[1]: sshd@6-172.238.171.230:22-20.229.252.112:51770.service: Deactivated successfully. Apr 17 00:07:51.263212 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 00:07:51.264213 systemd[1]: session-7.scope: Consumed 5.186s CPU time, 233.6M memory peak. Apr 17 00:07:51.269948 systemd-logind[1526]: Removed session 7. Apr 17 00:07:51.438435 update_engine[1539]: I20260417 00:07:51.436300 1539 update_attempter.cc:509] Updating boot flags... Apr 17 00:07:53.992014 systemd[1]: Created slice kubepods-besteffort-poded434dd5_53bb_4d39_ac2c_9c1299d0f6d8.slice - libcontainer container kubepods-besteffort-poded434dd5_53bb_4d39_ac2c_9c1299d0f6d8.slice. Apr 17 00:07:54.091511 kubelet[2735]: I0417 00:07:54.091421 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed434dd5-53bb-4d39-ac2c-9c1299d0f6d8-tigera-ca-bundle\") pod \"calico-typha-58d5dbcd8f-4qjlw\" (UID: \"ed434dd5-53bb-4d39-ac2c-9c1299d0f6d8\") " pod="calico-system/calico-typha-58d5dbcd8f-4qjlw" Apr 17 00:07:54.092678 kubelet[2735]: I0417 00:07:54.091995 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ed434dd5-53bb-4d39-ac2c-9c1299d0f6d8-typha-certs\") pod \"calico-typha-58d5dbcd8f-4qjlw\" (UID: \"ed434dd5-53bb-4d39-ac2c-9c1299d0f6d8\") " pod="calico-system/calico-typha-58d5dbcd8f-4qjlw" Apr 17 00:07:54.092678 kubelet[2735]: I0417 00:07:54.092131 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9x6g\" (UniqueName: 
\"kubernetes.io/projected/ed434dd5-53bb-4d39-ac2c-9c1299d0f6d8-kube-api-access-r9x6g\") pod \"calico-typha-58d5dbcd8f-4qjlw\" (UID: \"ed434dd5-53bb-4d39-ac2c-9c1299d0f6d8\") " pod="calico-system/calico-typha-58d5dbcd8f-4qjlw" Apr 17 00:07:54.102185 systemd[1]: Created slice kubepods-besteffort-pod25924e5f_70ac_4c8a_a245_b4c7237cac87.slice - libcontainer container kubepods-besteffort-pod25924e5f_70ac_4c8a_a245_b4c7237cac87.slice. Apr 17 00:07:54.194159 kubelet[2735]: I0417 00:07:54.193574 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-var-lib-calico\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194159 kubelet[2735]: I0417 00:07:54.193614 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-flexvol-driver-host\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194159 kubelet[2735]: I0417 00:07:54.193631 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-lib-modules\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194159 kubelet[2735]: I0417 00:07:54.193656 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/25924e5f-70ac-4c8a-a245-b4c7237cac87-node-certs\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194159 
kubelet[2735]: I0417 00:07:54.193684 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-cni-net-dir\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194411 kubelet[2735]: I0417 00:07:54.193711 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-nodeproc\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194411 kubelet[2735]: I0417 00:07:54.193730 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-policysync\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194411 kubelet[2735]: I0417 00:07:54.193746 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82hp7\" (UniqueName: \"kubernetes.io/projected/25924e5f-70ac-4c8a-a245-b4c7237cac87-kube-api-access-82hp7\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194411 kubelet[2735]: I0417 00:07:54.193760 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-cni-log-dir\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194411 kubelet[2735]: I0417 00:07:54.193784 2735 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-bpffs\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194536 kubelet[2735]: I0417 00:07:54.193798 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-sys-fs\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194536 kubelet[2735]: I0417 00:07:54.193835 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-cni-bin-dir\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194536 kubelet[2735]: I0417 00:07:54.193859 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25924e5f-70ac-4c8a-a245-b4c7237cac87-tigera-ca-bundle\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194536 kubelet[2735]: I0417 00:07:54.193883 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-var-run-calico\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.194536 kubelet[2735]: I0417 00:07:54.193904 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/25924e5f-70ac-4c8a-a245-b4c7237cac87-xtables-lock\") pod \"calico-node-f4jt7\" (UID: \"25924e5f-70ac-4c8a-a245-b4c7237cac87\") " pod="calico-system/calico-node-f4jt7" Apr 17 00:07:54.209290 kubelet[2735]: E0417 00:07:54.208253 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glxlh" podUID="0d2c4fdb-3d93-490b-aa53-b22402e33fe4" Apr 17 00:07:54.295291 kubelet[2735]: I0417 00:07:54.295148 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9k7d\" (UniqueName: \"kubernetes.io/projected/0d2c4fdb-3d93-490b-aa53-b22402e33fe4-kube-api-access-n9k7d\") pod \"csi-node-driver-glxlh\" (UID: \"0d2c4fdb-3d93-490b-aa53-b22402e33fe4\") " pod="calico-system/csi-node-driver-glxlh" Apr 17 00:07:54.295291 kubelet[2735]: I0417 00:07:54.295194 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0d2c4fdb-3d93-490b-aa53-b22402e33fe4-registration-dir\") pod \"csi-node-driver-glxlh\" (UID: \"0d2c4fdb-3d93-490b-aa53-b22402e33fe4\") " pod="calico-system/csi-node-driver-glxlh" Apr 17 00:07:54.295291 kubelet[2735]: I0417 00:07:54.295254 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d2c4fdb-3d93-490b-aa53-b22402e33fe4-kubelet-dir\") pod \"csi-node-driver-glxlh\" (UID: \"0d2c4fdb-3d93-490b-aa53-b22402e33fe4\") " pod="calico-system/csi-node-driver-glxlh" Apr 17 00:07:54.297349 kubelet[2735]: E0417 00:07:54.297302 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 
172.232.0.16 172.232.0.21" Apr 17 00:07:54.298915 kubelet[2735]: I0417 00:07:54.298806 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0d2c4fdb-3d93-490b-aa53-b22402e33fe4-socket-dir\") pod \"csi-node-driver-glxlh\" (UID: \"0d2c4fdb-3d93-490b-aa53-b22402e33fe4\") " pod="calico-system/csi-node-driver-glxlh" Apr 17 00:07:54.299163 kubelet[2735]: I0417 00:07:54.299121 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0d2c4fdb-3d93-490b-aa53-b22402e33fe4-varrun\") pod \"csi-node-driver-glxlh\" (UID: \"0d2c4fdb-3d93-490b-aa53-b22402e33fe4\") " pod="calico-system/csi-node-driver-glxlh" Apr 17 00:07:54.300264 containerd[1554]: time="2026-04-17T00:07:54.300226048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58d5dbcd8f-4qjlw,Uid:ed434dd5-53bb-4d39-ac2c-9c1299d0f6d8,Namespace:calico-system,Attempt:0,}" Apr 17 00:07:54.304896 kubelet[2735]: E0417 00:07:54.304866 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.305021 kubelet[2735]: W0417 00:07:54.304886 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.305412 kubelet[2735]: E0417 00:07:54.304955 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.305782 kubelet[2735]: E0417 00:07:54.305738 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.306021 kubelet[2735]: W0417 00:07:54.305979 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.306021 kubelet[2735]: E0417 00:07:54.306007 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.307693 kubelet[2735]: E0417 00:07:54.307609 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.307747 kubelet[2735]: W0417 00:07:54.307724 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.307747 kubelet[2735]: E0417 00:07:54.307740 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.308578 kubelet[2735]: E0417 00:07:54.308467 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.308578 kubelet[2735]: W0417 00:07:54.308519 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.308578 kubelet[2735]: E0417 00:07:54.308539 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.309974 kubelet[2735]: E0417 00:07:54.309949 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.310221 kubelet[2735]: W0417 00:07:54.309968 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.310221 kubelet[2735]: E0417 00:07:54.309988 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.310311 kubelet[2735]: E0417 00:07:54.310288 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.310526 kubelet[2735]: W0417 00:07:54.310307 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.310526 kubelet[2735]: E0417 00:07:54.310511 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.311408 kubelet[2735]: E0417 00:07:54.311387 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.311408 kubelet[2735]: W0417 00:07:54.311406 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.311585 kubelet[2735]: E0417 00:07:54.311417 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.311613 kubelet[2735]: E0417 00:07:54.311602 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.311613 kubelet[2735]: W0417 00:07:54.311611 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.311653 kubelet[2735]: E0417 00:07:54.311619 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.311973 kubelet[2735]: E0417 00:07:54.311918 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.311973 kubelet[2735]: W0417 00:07:54.311932 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.311973 kubelet[2735]: E0417 00:07:54.311944 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.318088 kubelet[2735]: E0417 00:07:54.317616 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.318088 kubelet[2735]: W0417 00:07:54.317630 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.318088 kubelet[2735]: E0417 00:07:54.317640 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.337395 kubelet[2735]: E0417 00:07:54.337364 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.337395 kubelet[2735]: W0417 00:07:54.337388 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.337539 kubelet[2735]: E0417 00:07:54.337406 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.341674 containerd[1554]: time="2026-04-17T00:07:54.341561659Z" level=info msg="connecting to shim 6e699574fe1edcdcbf0d4021cb63af5c5827b3be8095a1db70ae97be3a345059" address="unix:///run/containerd/s/2c094563bac3fd490e73b33afbda6335980697f71d28eb39d535a4db4417c504" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:07:54.376405 systemd[1]: Started cri-containerd-6e699574fe1edcdcbf0d4021cb63af5c5827b3be8095a1db70ae97be3a345059.scope - libcontainer container 6e699574fe1edcdcbf0d4021cb63af5c5827b3be8095a1db70ae97be3a345059. 
Apr 17 00:07:54.400749 kubelet[2735]: E0417 00:07:54.400716 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.400749 kubelet[2735]: W0417 00:07:54.400739 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.400749 kubelet[2735]: E0417 00:07:54.400759 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.401609 kubelet[2735]: E0417 00:07:54.401591 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.401609 kubelet[2735]: W0417 00:07:54.401605 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.401706 kubelet[2735]: E0417 00:07:54.401616 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.402035 kubelet[2735]: E0417 00:07:54.402009 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.402035 kubelet[2735]: W0417 00:07:54.402279 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.402035 kubelet[2735]: E0417 00:07:54.402309 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.402849 kubelet[2735]: E0417 00:07:54.402819 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.402849 kubelet[2735]: W0417 00:07:54.402830 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.402954 kubelet[2735]: E0417 00:07:54.402939 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.403536 kubelet[2735]: E0417 00:07:54.403507 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.403536 kubelet[2735]: W0417 00:07:54.403516 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.403638 kubelet[2735]: E0417 00:07:54.403624 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.404283 kubelet[2735]: E0417 00:07:54.404245 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.404283 kubelet[2735]: W0417 00:07:54.404260 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.404283 kubelet[2735]: E0417 00:07:54.404271 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.404716 kubelet[2735]: E0417 00:07:54.404682 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.404775 kubelet[2735]: W0417 00:07:54.404695 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.404930 kubelet[2735]: E0417 00:07:54.404856 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.405423 kubelet[2735]: E0417 00:07:54.405408 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.405511 kubelet[2735]: W0417 00:07:54.405498 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.405599 kubelet[2735]: E0417 00:07:54.405582 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.406238 kubelet[2735]: E0417 00:07:54.406194 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.406238 kubelet[2735]: W0417 00:07:54.406206 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.406238 kubelet[2735]: E0417 00:07:54.406215 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.406572 kubelet[2735]: E0417 00:07:54.406540 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.406572 kubelet[2735]: W0417 00:07:54.406551 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.406572 kubelet[2735]: E0417 00:07:54.406560 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.406908 kubelet[2735]: E0417 00:07:54.406893 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.407001 kubelet[2735]: W0417 00:07:54.406963 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.407001 kubelet[2735]: E0417 00:07:54.406977 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.407431 kubelet[2735]: E0417 00:07:54.407421 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.407523 kubelet[2735]: W0417 00:07:54.407479 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.407523 kubelet[2735]: E0417 00:07:54.407491 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.407890 kubelet[2735]: E0417 00:07:54.407879 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.407956 kubelet[2735]: W0417 00:07:54.407944 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.408032 kubelet[2735]: E0417 00:07:54.407998 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.408934 kubelet[2735]: E0417 00:07:54.408697 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.409027 kubelet[2735]: W0417 00:07:54.408997 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.409027 kubelet[2735]: E0417 00:07:54.409014 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.410221 kubelet[2735]: E0417 00:07:54.410186 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.410221 kubelet[2735]: W0417 00:07:54.410198 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.410221 kubelet[2735]: E0417 00:07:54.410207 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.411060 kubelet[2735]: E0417 00:07:54.410970 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.411060 kubelet[2735]: W0417 00:07:54.410983 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.411060 kubelet[2735]: E0417 00:07:54.410994 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.411384 kubelet[2735]: E0417 00:07:54.411372 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.411654 containerd[1554]: time="2026-04-17T00:07:54.411510258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f4jt7,Uid:25924e5f-70ac-4c8a-a245-b4c7237cac87,Namespace:calico-system,Attempt:0,}" Apr 17 00:07:54.411726 kubelet[2735]: W0417 00:07:54.411714 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.411785 kubelet[2735]: E0417 00:07:54.411775 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.412143 kubelet[2735]: E0417 00:07:54.412131 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.412279 kubelet[2735]: W0417 00:07:54.412210 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.412342 kubelet[2735]: E0417 00:07:54.412327 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.412842 kubelet[2735]: E0417 00:07:54.412764 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.412842 kubelet[2735]: W0417 00:07:54.412775 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.412842 kubelet[2735]: E0417 00:07:54.412784 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.413643 kubelet[2735]: E0417 00:07:54.413368 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.413643 kubelet[2735]: W0417 00:07:54.413379 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.413643 kubelet[2735]: E0417 00:07:54.413388 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.413643 kubelet[2735]: E0417 00:07:54.413614 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.413643 kubelet[2735]: W0417 00:07:54.413623 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.413643 kubelet[2735]: E0417 00:07:54.413631 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.414187 kubelet[2735]: E0417 00:07:54.414159 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.414662 kubelet[2735]: W0417 00:07:54.414171 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.414662 kubelet[2735]: E0417 00:07:54.414469 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.415504 kubelet[2735]: E0417 00:07:54.415465 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.415504 kubelet[2735]: W0417 00:07:54.415476 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.415504 kubelet[2735]: E0417 00:07:54.415486 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.416889 kubelet[2735]: E0417 00:07:54.416875 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.416973 kubelet[2735]: W0417 00:07:54.416960 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.417037 kubelet[2735]: E0417 00:07:54.417025 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.417851 kubelet[2735]: E0417 00:07:54.417838 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.417991 kubelet[2735]: W0417 00:07:54.417978 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.418419 kubelet[2735]: E0417 00:07:54.418365 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:54.426398 kubelet[2735]: E0417 00:07:54.426365 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:54.426398 kubelet[2735]: W0417 00:07:54.426390 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:54.426600 kubelet[2735]: E0417 00:07:54.426411 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:54.439609 containerd[1554]: time="2026-04-17T00:07:54.439563632Z" level=info msg="connecting to shim 862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb" address="unix:///run/containerd/s/1696f826a99804a8e1e029a121eba0f1764f0510351191dbc3833d135e91536a" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:07:54.457199 containerd[1554]: time="2026-04-17T00:07:54.457093812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58d5dbcd8f-4qjlw,Uid:ed434dd5-53bb-4d39-ac2c-9c1299d0f6d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e699574fe1edcdcbf0d4021cb63af5c5827b3be8095a1db70ae97be3a345059\"" Apr 17 00:07:54.459512 kubelet[2735]: E0417 00:07:54.458991 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:54.463668 containerd[1554]: time="2026-04-17T00:07:54.463627581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 00:07:54.477300 systemd[1]: Started cri-containerd-862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb.scope - libcontainer container 862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb. 
Apr 17 00:07:54.510504 containerd[1554]: time="2026-04-17T00:07:54.510429958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-f4jt7,Uid:25924e5f-70ac-4c8a-a245-b4c7237cac87,Namespace:calico-system,Attempt:0,} returns sandbox id \"862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb\"" Apr 17 00:07:55.699636 containerd[1554]: time="2026-04-17T00:07:55.699598944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:55.700446 containerd[1554]: time="2026-04-17T00:07:55.700270596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 17 00:07:55.701007 containerd[1554]: time="2026-04-17T00:07:55.700980867Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:55.702732 containerd[1554]: time="2026-04-17T00:07:55.702712285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:55.703209 containerd[1554]: time="2026-04-17T00:07:55.703168519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.239500069s" Apr 17 00:07:55.703209 containerd[1554]: time="2026-04-17T00:07:55.703196839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 00:07:55.704397 
containerd[1554]: time="2026-04-17T00:07:55.704273075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 00:07:55.721475 containerd[1554]: time="2026-04-17T00:07:55.721434225Z" level=info msg="CreateContainer within sandbox \"6e699574fe1edcdcbf0d4021cb63af5c5827b3be8095a1db70ae97be3a345059\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 00:07:55.729813 containerd[1554]: time="2026-04-17T00:07:55.728848470Z" level=info msg="Container 1fb05ea0ea8d028e6b58a277cb1cededbd02035e46ada0ac41204e499795f79c: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:07:55.736200 containerd[1554]: time="2026-04-17T00:07:55.736167817Z" level=info msg="CreateContainer within sandbox \"6e699574fe1edcdcbf0d4021cb63af5c5827b3be8095a1db70ae97be3a345059\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1fb05ea0ea8d028e6b58a277cb1cededbd02035e46ada0ac41204e499795f79c\"" Apr 17 00:07:55.736676 containerd[1554]: time="2026-04-17T00:07:55.736658821Z" level=info msg="StartContainer for \"1fb05ea0ea8d028e6b58a277cb1cededbd02035e46ada0ac41204e499795f79c\"" Apr 17 00:07:55.738483 containerd[1554]: time="2026-04-17T00:07:55.738175931Z" level=info msg="connecting to shim 1fb05ea0ea8d028e6b58a277cb1cededbd02035e46ada0ac41204e499795f79c" address="unix:///run/containerd/s/2c094563bac3fd490e73b33afbda6335980697f71d28eb39d535a4db4417c504" protocol=ttrpc version=3 Apr 17 00:07:55.768180 systemd[1]: Started cri-containerd-1fb05ea0ea8d028e6b58a277cb1cededbd02035e46ada0ac41204e499795f79c.scope - libcontainer container 1fb05ea0ea8d028e6b58a277cb1cededbd02035e46ada0ac41204e499795f79c. 
Apr 17 00:07:55.774356 kubelet[2735]: E0417 00:07:55.774244 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glxlh" podUID="0d2c4fdb-3d93-490b-aa53-b22402e33fe4" Apr 17 00:07:55.828357 containerd[1554]: time="2026-04-17T00:07:55.828305978Z" level=info msg="StartContainer for \"1fb05ea0ea8d028e6b58a277cb1cededbd02035e46ada0ac41204e499795f79c\" returns successfully" Apr 17 00:07:55.849685 kubelet[2735]: E0417 00:07:55.849657 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:07:55.861785 kubelet[2735]: I0417 00:07:55.861720 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58d5dbcd8f-4qjlw" podStartSLOduration=1.62085423 podStartE2EDuration="2.86170834s" podCreationTimestamp="2026-04-17 00:07:53 +0000 UTC" firstStartedPulling="2026-04-17 00:07:54.463220147 +0000 UTC m=+18.788388899" lastFinishedPulling="2026-04-17 00:07:55.704074257 +0000 UTC m=+20.029243009" observedRunningTime="2026-04-17 00:07:55.861222917 +0000 UTC m=+20.186391669" watchObservedRunningTime="2026-04-17 00:07:55.86170834 +0000 UTC m=+20.186877092" Apr 17 00:07:55.888398 kubelet[2735]: E0417 00:07:55.888365 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.888398 kubelet[2735]: W0417 00:07:55.888386 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.889014 kubelet[2735]: E0417 00:07:55.888675 2735 plugins.go:703] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.889285 kubelet[2735]: E0417 00:07:55.889105 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.889285 kubelet[2735]: W0417 00:07:55.889114 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.889285 kubelet[2735]: E0417 00:07:55.889122 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.891135 kubelet[2735]: E0417 00:07:55.891108 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.891135 kubelet[2735]: W0417 00:07:55.891128 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.891135 kubelet[2735]: E0417 00:07:55.891138 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.891625 kubelet[2735]: E0417 00:07:55.891523 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.891665 kubelet[2735]: W0417 00:07:55.891531 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.891665 kubelet[2735]: E0417 00:07:55.891650 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.892216 kubelet[2735]: E0417 00:07:55.892102 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.892216 kubelet[2735]: W0417 00:07:55.892118 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.892216 kubelet[2735]: E0417 00:07:55.892126 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.893400 kubelet[2735]: E0417 00:07:55.892683 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.893400 kubelet[2735]: W0417 00:07:55.892697 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.893400 kubelet[2735]: E0417 00:07:55.892706 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.893400 kubelet[2735]: E0417 00:07:55.892863 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.893400 kubelet[2735]: W0417 00:07:55.892870 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.893400 kubelet[2735]: E0417 00:07:55.892877 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.894516 kubelet[2735]: E0417 00:07:55.894487 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.894516 kubelet[2735]: W0417 00:07:55.894508 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.894516 kubelet[2735]: E0417 00:07:55.894518 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.895063 kubelet[2735]: E0417 00:07:55.894925 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.895063 kubelet[2735]: W0417 00:07:55.894939 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.895063 kubelet[2735]: E0417 00:07:55.894948 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.895431 kubelet[2735]: E0417 00:07:55.895156 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.895431 kubelet[2735]: W0417 00:07:55.895168 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.895431 kubelet[2735]: E0417 00:07:55.895175 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.895669 kubelet[2735]: E0417 00:07:55.895580 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.895669 kubelet[2735]: W0417 00:07:55.895588 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.895669 kubelet[2735]: E0417 00:07:55.895595 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.897227 kubelet[2735]: E0417 00:07:55.897204 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.897227 kubelet[2735]: W0417 00:07:55.897220 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.897227 kubelet[2735]: E0417 00:07:55.897228 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.897501 kubelet[2735]: E0417 00:07:55.897386 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.897501 kubelet[2735]: W0417 00:07:55.897400 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.897501 kubelet[2735]: E0417 00:07:55.897408 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.897680 kubelet[2735]: E0417 00:07:55.897558 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.897680 kubelet[2735]: W0417 00:07:55.897565 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.897680 kubelet[2735]: E0417 00:07:55.897572 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.897680 kubelet[2735]: E0417 00:07:55.897721 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.897680 kubelet[2735]: W0417 00:07:55.897727 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.897680 kubelet[2735]: E0417 00:07:55.897734 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.917801 kubelet[2735]: E0417 00:07:55.917734 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.917801 kubelet[2735]: W0417 00:07:55.917774 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.917801 kubelet[2735]: E0417 00:07:55.917790 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.918238 kubelet[2735]: E0417 00:07:55.918201 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.918276 kubelet[2735]: W0417 00:07:55.918258 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.918325 kubelet[2735]: E0417 00:07:55.918275 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.918895 kubelet[2735]: E0417 00:07:55.918624 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.918895 kubelet[2735]: W0417 00:07:55.918636 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.918895 kubelet[2735]: E0417 00:07:55.918674 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.919351 kubelet[2735]: E0417 00:07:55.919298 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.919351 kubelet[2735]: W0417 00:07:55.919326 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.919351 kubelet[2735]: E0417 00:07:55.919338 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.920365 kubelet[2735]: E0417 00:07:55.920333 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.920365 kubelet[2735]: W0417 00:07:55.920352 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.920490 kubelet[2735]: E0417 00:07:55.920362 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.921240 kubelet[2735]: E0417 00:07:55.920882 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.921240 kubelet[2735]: W0417 00:07:55.920925 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.921240 kubelet[2735]: E0417 00:07:55.920940 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.921485 kubelet[2735]: E0417 00:07:55.921462 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.921485 kubelet[2735]: W0417 00:07:55.921480 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.921554 kubelet[2735]: E0417 00:07:55.921489 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.921887 kubelet[2735]: E0417 00:07:55.921856 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.921887 kubelet[2735]: W0417 00:07:55.921872 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.921887 kubelet[2735]: E0417 00:07:55.921880 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.922422 kubelet[2735]: E0417 00:07:55.922390 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.922422 kubelet[2735]: W0417 00:07:55.922408 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.922422 kubelet[2735]: E0417 00:07:55.922420 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.922914 kubelet[2735]: E0417 00:07:55.922889 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.922914 kubelet[2735]: W0417 00:07:55.922907 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.922970 kubelet[2735]: E0417 00:07:55.922917 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.923939 kubelet[2735]: E0417 00:07:55.923911 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.923939 kubelet[2735]: W0417 00:07:55.923929 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.923939 kubelet[2735]: E0417 00:07:55.923938 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.924595 kubelet[2735]: E0417 00:07:55.924567 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.924595 kubelet[2735]: W0417 00:07:55.924583 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.924595 kubelet[2735]: E0417 00:07:55.924592 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.925527 kubelet[2735]: E0417 00:07:55.925503 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.925527 kubelet[2735]: W0417 00:07:55.925520 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.925527 kubelet[2735]: E0417 00:07:55.925529 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.925987 kubelet[2735]: E0417 00:07:55.925963 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.926019 kubelet[2735]: W0417 00:07:55.926003 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.926019 kubelet[2735]: E0417 00:07:55.926013 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.926377 kubelet[2735]: E0417 00:07:55.926354 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.926377 kubelet[2735]: W0417 00:07:55.926369 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.926377 kubelet[2735]: E0417 00:07:55.926377 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.926684 kubelet[2735]: E0417 00:07:55.926642 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.926719 kubelet[2735]: W0417 00:07:55.926706 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.926742 kubelet[2735]: E0417 00:07:55.926718 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:55.928261 kubelet[2735]: E0417 00:07:55.927111 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.928261 kubelet[2735]: W0417 00:07:55.927124 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.928261 kubelet[2735]: E0417 00:07:55.927133 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 00:07:55.928261 kubelet[2735]: E0417 00:07:55.927790 2735 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 00:07:55.928261 kubelet[2735]: W0417 00:07:55.927798 2735 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 00:07:55.928261 kubelet[2735]: E0417 00:07:55.927806 2735 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 00:07:56.372146 containerd[1554]: time="2026-04-17T00:07:56.372096391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:56.372853 containerd[1554]: time="2026-04-17T00:07:56.372822982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 17 00:07:56.374095 containerd[1554]: time="2026-04-17T00:07:56.373544124Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:56.376619 containerd[1554]: time="2026-04-17T00:07:56.376582348Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:07:56.377431 containerd[1554]: time="2026-04-17T00:07:56.377388908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 673.087434ms" Apr 17 00:07:56.377431 containerd[1554]: time="2026-04-17T00:07:56.377428917Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 00:07:56.381887 containerd[1554]: time="2026-04-17T00:07:56.381860945Z" level=info msg="CreateContainer within sandbox \"862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 00:07:56.392187 containerd[1554]: time="2026-04-17T00:07:56.391179824Z" level=info msg="Container 924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:07:56.408536 containerd[1554]: time="2026-04-17T00:07:56.408501928Z" level=info msg="CreateContainer within sandbox \"862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488\"" Apr 17 00:07:56.411200 containerd[1554]: time="2026-04-17T00:07:56.409140819Z" level=info msg="StartContainer for \"924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488\"" Apr 17 00:07:56.411586 containerd[1554]: time="2026-04-17T00:07:56.411525871Z" level=info msg="connecting to shim 924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488" address="unix:///run/containerd/s/1696f826a99804a8e1e029a121eba0f1764f0510351191dbc3833d135e91536a" protocol=ttrpc version=3 Apr 17 00:07:56.440458 systemd[1]: Started cri-containerd-924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488.scope - libcontainer container 924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488. Apr 17 00:07:56.507327 containerd[1554]: time="2026-04-17T00:07:56.507282822Z" level=info msg="StartContainer for \"924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488\" returns successfully" Apr 17 00:07:56.527608 systemd[1]: cri-containerd-924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488.scope: Deactivated successfully. 
Apr 17 00:07:56.530511 containerd[1554]: time="2026-04-17T00:07:56.530379297Z" level=info msg="received container exit event container_id:\"924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488\" id:\"924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488\" pid:3395 exited_at:{seconds:1776384476 nanos:529765024}"
Apr 17 00:07:56.557948 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-924015e71ef8f0276b215959167237c8d0c07203a7ddcca050ed915ecc7bc488-rootfs.mount: Deactivated successfully.
Apr 17 00:07:56.853419 kubelet[2735]: I0417 00:07:56.853391 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 17 00:07:56.854894 containerd[1554]: time="2026-04-17T00:07:56.854428960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\""
Apr 17 00:07:56.856443 kubelet[2735]: E0417 00:07:56.855861 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 17 00:07:57.773821 kubelet[2735]: E0417 00:07:57.773690 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glxlh" podUID="0d2c4fdb-3d93-490b-aa53-b22402e33fe4"
Apr 17 00:07:59.774469 kubelet[2735]: E0417 00:07:59.774124 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glxlh" podUID="0d2c4fdb-3d93-490b-aa53-b22402e33fe4"
Apr 17 00:08:00.580015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount380072286.mount: Deactivated successfully.
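The kubelet "Nameserver limits exceeded" event above reflects a sanity check when building pod DNS config: the nameserver list is capped at three entries (the classic resolv.conf limit), and kubelet logs the line it actually applies. A rough illustration of the trimming, assuming the limit of 3 from the observed behavior rather than kubelet's source:

```go
package main

import "fmt"

// maxNameservers mirrors the three-entry resolv.conf limit that the
// kubelet event above reports when it omits extra nameservers.
const maxNameservers = 3

// trimNameservers keeps at most maxNameservers entries and reports
// whether any were omitted.
func trimNameservers(ns []string) (applied []string, omitted bool) {
	if len(ns) <= maxNameservers {
		return ns, false
	}
	return ns[:maxNameservers], true
}

func main() {
	// The applied line from the log kept exactly three servers.
	applied, omitted := trimNameservers([]string{
		"172.232.0.17", "172.232.0.16", "172.232.0.21", "172.232.0.35",
	})
	fmt.Println(applied, omitted)
}
```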
Apr 17 00:08:00.609283 containerd[1554]: time="2026-04-17T00:08:00.609235606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:08:00.610069 containerd[1554]: time="2026-04-17T00:08:00.609946920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 17 00:08:00.610536 containerd[1554]: time="2026-04-17T00:08:00.610508705Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:08:00.612160 containerd[1554]: time="2026-04-17T00:08:00.612132621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:08:00.612770 containerd[1554]: time="2026-04-17T00:08:00.612747396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.758282346s"
Apr 17 00:08:00.612841 containerd[1554]: time="2026-04-17T00:08:00.612826855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 17 00:08:00.616278 containerd[1554]: time="2026-04-17T00:08:00.616248385Z" level=info msg="CreateContainer within sandbox \"862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 17 00:08:00.625302 containerd[1554]: time="2026-04-17T00:08:00.624404352Z" level=info msg="Container 19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:08:00.635776 containerd[1554]: time="2026-04-17T00:08:00.635743442Z" level=info msg="CreateContainer within sandbox \"862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae\""
Apr 17 00:08:00.636334 containerd[1554]: time="2026-04-17T00:08:00.636308947Z" level=info msg="StartContainer for \"19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae\""
Apr 17 00:08:00.637740 containerd[1554]: time="2026-04-17T00:08:00.637669395Z" level=info msg="connecting to shim 19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae" address="unix:///run/containerd/s/1696f826a99804a8e1e029a121eba0f1764f0510351191dbc3833d135e91536a" protocol=ttrpc version=3
Apr 17 00:08:00.659192 systemd[1]: Started cri-containerd-19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae.scope - libcontainer container 19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae.
Apr 17 00:08:00.724334 containerd[1554]: time="2026-04-17T00:08:00.724299468Z" level=info msg="StartContainer for \"19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae\" returns successfully"
Apr 17 00:08:00.771794 systemd[1]: cri-containerd-19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae.scope: Deactivated successfully.
Apr 17 00:08:00.774587 containerd[1554]: time="2026-04-17T00:08:00.774545763Z" level=info msg="received container exit event container_id:\"19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae\" id:\"19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae\" pid:3451 exited_at:{seconds:1776384480 nanos:774278986}"
Apr 17 00:08:00.798705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19821c939a06636d59682281585b3be7de9eeac2ef4ec52f863d09f4284264ae-rootfs.mount: Deactivated successfully.
Apr 17 00:08:01.773977 kubelet[2735]: E0417 00:08:01.773378 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glxlh" podUID="0d2c4fdb-3d93-490b-aa53-b22402e33fe4"
Apr 17 00:08:01.884159 containerd[1554]: time="2026-04-17T00:08:01.884124481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 17 00:08:03.772804 kubelet[2735]: E0417 00:08:03.772489 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-glxlh" podUID="0d2c4fdb-3d93-490b-aa53-b22402e33fe4"
Apr 17 00:08:03.841605 containerd[1554]: time="2026-04-17T00:08:03.841538162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:08:03.842650 containerd[1554]: time="2026-04-17T00:08:03.842579126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 17 00:08:03.845069 containerd[1554]: time="2026-04-17T00:08:03.843640629Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:08:03.846373 containerd[1554]: time="2026-04-17T00:08:03.846340760Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 00:08:03.847354 containerd[1554]: time="2026-04-17T00:08:03.847324772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1.962721965s"
Apr 17 00:08:03.847481 containerd[1554]: time="2026-04-17T00:08:03.847457741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 17 00:08:03.852200 containerd[1554]: time="2026-04-17T00:08:03.852152428Z" level=info msg="CreateContainer within sandbox \"862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 17 00:08:03.862064 containerd[1554]: time="2026-04-17T00:08:03.861094566Z" level=info msg="Container ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738: CDI devices from CRI Config.CDIDevices: []"
Apr 17 00:08:03.873602 containerd[1554]: time="2026-04-17T00:08:03.873325739Z" level=info msg="CreateContainer within sandbox \"862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738\""
Apr 17 00:08:03.875124 containerd[1554]: time="2026-04-17T00:08:03.875091877Z" level=info msg="StartContainer for \"ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738\""
Apr 17 00:08:03.877640 containerd[1554]: time="2026-04-17T00:08:03.877593269Z" level=info msg="connecting to shim ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738" address="unix:///run/containerd/s/1696f826a99804a8e1e029a121eba0f1764f0510351191dbc3833d135e91536a" protocol=ttrpc version=3
Apr 17 00:08:03.916253 systemd[1]: Started cri-containerd-ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738.scope - libcontainer container ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738.
Apr 17 00:08:04.026166 containerd[1554]: time="2026-04-17T00:08:04.025784501Z" level=info msg="StartContainer for \"ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738\" returns successfully"
Apr 17 00:08:04.663487 containerd[1554]: time="2026-04-17T00:08:04.663432228Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 00:08:04.666469 systemd[1]: cri-containerd-ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738.scope: Deactivated successfully.
Apr 17 00:08:04.666800 systemd[1]: cri-containerd-ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738.scope: Consumed 660ms CPU time, 189.6M memory peak, 1.9M read from disk, 177M written to disk.
Apr 17 00:08:04.667961 containerd[1554]: time="2026-04-17T00:08:04.667934369Z" level=info msg="received container exit event container_id:\"ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738\" id:\"ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738\" pid:3506 exited_at:{seconds:1776384484 nanos:667586911}" Apr 17 00:08:04.713940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ebf80d7488dc05930522f046a08df58ccf9e158169247763fdbe4256e8e27738-rootfs.mount: Deactivated successfully. Apr 17 00:08:04.737343 kubelet[2735]: I0417 00:08:04.737314 2735 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 00:08:04.776964 systemd[1]: Created slice kubepods-burstable-pod291fff59_a234_447d_bf37_bf2edb7a7686.slice - libcontainer container kubepods-burstable-pod291fff59_a234_447d_bf37_bf2edb7a7686.slice. Apr 17 00:08:04.789447 systemd[1]: Created slice kubepods-besteffort-podb8ee9421_114f_42e6_8b33_b6b49acbf949.slice - libcontainer container kubepods-besteffort-podb8ee9421_114f_42e6_8b33_b6b49acbf949.slice. Apr 17 00:08:04.812627 systemd[1]: Created slice kubepods-besteffort-pod3ff4324b_f73c_4867_bebd_0f2f3d60a9ae.slice - libcontainer container kubepods-besteffort-pod3ff4324b_f73c_4867_bebd_0f2f3d60a9ae.slice. Apr 17 00:08:04.822076 systemd[1]: Created slice kubepods-besteffort-poda6b510cb_eace_469e_8840_ce52365e8af1.slice - libcontainer container kubepods-besteffort-poda6b510cb_eace_469e_8840_ce52365e8af1.slice. Apr 17 00:08:04.831265 systemd[1]: Created slice kubepods-besteffort-pod70b0eb97_2013_448c_8115_21c3dc1415a1.slice - libcontainer container kubepods-besteffort-pod70b0eb97_2013_448c_8115_21c3dc1415a1.slice. Apr 17 00:08:04.839439 systemd[1]: Created slice kubepods-burstable-podb3226986_1113_4b4e_90f5_4d61e0f410e9.slice - libcontainer container kubepods-burstable-podb3226986_1113_4b4e_90f5_4d61e0f410e9.slice. 
Apr 17 00:08:04.846216 systemd[1]: Created slice kubepods-besteffort-poddd6be1ad_8a91_4e2c_b9de_0116fc64a64f.slice - libcontainer container kubepods-besteffort-poddd6be1ad_8a91_4e2c_b9de_0116fc64a64f.slice. Apr 17 00:08:04.881079 kubelet[2735]: I0417 00:08:04.881019 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70b0eb97-2013-448c-8115-21c3dc1415a1-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-j6qwc\" (UID: \"70b0eb97-2013-448c-8115-21c3dc1415a1\") " pod="calico-system/goldmane-5b85766d88-j6qwc" Apr 17 00:08:04.881644 kubelet[2735]: I0417 00:08:04.881580 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzz69\" (UniqueName: \"kubernetes.io/projected/291fff59-a234-447d-bf37-bf2edb7a7686-kube-api-access-qzz69\") pod \"coredns-674b8bbfcf-x6xmx\" (UID: \"291fff59-a234-447d-bf37-bf2edb7a7686\") " pod="kube-system/coredns-674b8bbfcf-x6xmx" Apr 17 00:08:04.881644 kubelet[2735]: I0417 00:08:04.881609 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b3226986-1113-4b4e-90f5-4d61e0f410e9-config-volume\") pod \"coredns-674b8bbfcf-pcn6t\" (UID: \"b3226986-1113-4b4e-90f5-4d61e0f410e9\") " pod="kube-system/coredns-674b8bbfcf-pcn6t" Apr 17 00:08:04.881809 kubelet[2735]: I0417 00:08:04.881627 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fktdg\" (UniqueName: \"kubernetes.io/projected/dd6be1ad-8a91-4e2c-b9de-0116fc64a64f-kube-api-access-fktdg\") pod \"calico-apiserver-56f7dfb777-lbd8z\" (UID: \"dd6be1ad-8a91-4e2c-b9de-0116fc64a64f\") " pod="calico-system/calico-apiserver-56f7dfb777-lbd8z" Apr 17 00:08:04.881809 kubelet[2735]: I0417 00:08:04.881758 2735 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a6b510cb-eace-469e-8840-ce52365e8af1-whisker-backend-key-pair\") pod \"whisker-776bdf9464-98fst\" (UID: \"a6b510cb-eace-469e-8840-ce52365e8af1\") " pod="calico-system/whisker-776bdf9464-98fst" Apr 17 00:08:04.881809 kubelet[2735]: I0417 00:08:04.881779 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/291fff59-a234-447d-bf37-bf2edb7a7686-config-volume\") pod \"coredns-674b8bbfcf-x6xmx\" (UID: \"291fff59-a234-447d-bf37-bf2edb7a7686\") " pod="kube-system/coredns-674b8bbfcf-x6xmx" Apr 17 00:08:04.881979 kubelet[2735]: I0417 00:08:04.881933 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ff4324b-f73c-4867-bebd-0f2f3d60a9ae-tigera-ca-bundle\") pod \"calico-kube-controllers-7857d958f-t7c9t\" (UID: \"3ff4324b-f73c-4867-bebd-0f2f3d60a9ae\") " pod="calico-system/calico-kube-controllers-7857d958f-t7c9t" Apr 17 00:08:04.881979 kubelet[2735]: I0417 00:08:04.881956 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hjp2\" (UniqueName: \"kubernetes.io/projected/3ff4324b-f73c-4867-bebd-0f2f3d60a9ae-kube-api-access-4hjp2\") pod \"calico-kube-controllers-7857d958f-t7c9t\" (UID: \"3ff4324b-f73c-4867-bebd-0f2f3d60a9ae\") " pod="calico-system/calico-kube-controllers-7857d958f-t7c9t" Apr 17 00:08:04.882117 kubelet[2735]: I0417 00:08:04.882102 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a6b510cb-eace-469e-8840-ce52365e8af1-nginx-config\") pod \"whisker-776bdf9464-98fst\" (UID: \"a6b510cb-eace-469e-8840-ce52365e8af1\") " 
pod="calico-system/whisker-776bdf9464-98fst" Apr 17 00:08:04.882256 kubelet[2735]: I0417 00:08:04.882170 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f4cn\" (UniqueName: \"kubernetes.io/projected/a6b510cb-eace-469e-8840-ce52365e8af1-kube-api-access-7f4cn\") pod \"whisker-776bdf9464-98fst\" (UID: \"a6b510cb-eace-469e-8840-ce52365e8af1\") " pod="calico-system/whisker-776bdf9464-98fst" Apr 17 00:08:04.882256 kubelet[2735]: I0417 00:08:04.882190 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b8ee9421-114f-42e6-8b33-b6b49acbf949-calico-apiserver-certs\") pod \"calico-apiserver-56f7dfb777-nwsb6\" (UID: \"b8ee9421-114f-42e6-8b33-b6b49acbf949\") " pod="calico-system/calico-apiserver-56f7dfb777-nwsb6" Apr 17 00:08:04.882256 kubelet[2735]: I0417 00:08:04.882206 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj6hs\" (UniqueName: \"kubernetes.io/projected/b3226986-1113-4b4e-90f5-4d61e0f410e9-kube-api-access-zj6hs\") pod \"coredns-674b8bbfcf-pcn6t\" (UID: \"b3226986-1113-4b4e-90f5-4d61e0f410e9\") " pod="kube-system/coredns-674b8bbfcf-pcn6t" Apr 17 00:08:04.882427 kubelet[2735]: I0417 00:08:04.882390 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dd6be1ad-8a91-4e2c-b9de-0116fc64a64f-calico-apiserver-certs\") pod \"calico-apiserver-56f7dfb777-lbd8z\" (UID: \"dd6be1ad-8a91-4e2c-b9de-0116fc64a64f\") " pod="calico-system/calico-apiserver-56f7dfb777-lbd8z" Apr 17 00:08:04.882560 kubelet[2735]: I0417 00:08:04.882516 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr8sf\" (UniqueName: 
\"kubernetes.io/projected/b8ee9421-114f-42e6-8b33-b6b49acbf949-kube-api-access-lr8sf\") pod \"calico-apiserver-56f7dfb777-nwsb6\" (UID: \"b8ee9421-114f-42e6-8b33-b6b49acbf949\") " pod="calico-system/calico-apiserver-56f7dfb777-nwsb6" Apr 17 00:08:04.882560 kubelet[2735]: I0417 00:08:04.882537 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70b0eb97-2013-448c-8115-21c3dc1415a1-config\") pod \"goldmane-5b85766d88-j6qwc\" (UID: \"70b0eb97-2013-448c-8115-21c3dc1415a1\") " pod="calico-system/goldmane-5b85766d88-j6qwc" Apr 17 00:08:04.882670 kubelet[2735]: I0417 00:08:04.882657 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/70b0eb97-2013-448c-8115-21c3dc1415a1-goldmane-key-pair\") pod \"goldmane-5b85766d88-j6qwc\" (UID: \"70b0eb97-2013-448c-8115-21c3dc1415a1\") " pod="calico-system/goldmane-5b85766d88-j6qwc" Apr 17 00:08:04.882843 kubelet[2735]: I0417 00:08:04.882768 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkgwl\" (UniqueName: \"kubernetes.io/projected/70b0eb97-2013-448c-8115-21c3dc1415a1-kube-api-access-gkgwl\") pod \"goldmane-5b85766d88-j6qwc\" (UID: \"70b0eb97-2013-448c-8115-21c3dc1415a1\") " pod="calico-system/goldmane-5b85766d88-j6qwc" Apr 17 00:08:04.882843 kubelet[2735]: I0417 00:08:04.882802 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6b510cb-eace-469e-8840-ce52365e8af1-whisker-ca-bundle\") pod \"whisker-776bdf9464-98fst\" (UID: \"a6b510cb-eace-469e-8840-ce52365e8af1\") " pod="calico-system/whisker-776bdf9464-98fst" Apr 17 00:08:04.929448 containerd[1554]: time="2026-04-17T00:08:04.929353760Z" level=info msg="CreateContainer within sandbox 
\"862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 00:08:04.939346 containerd[1554]: time="2026-04-17T00:08:04.936878251Z" level=info msg="Container 6e141755e41236533cfff88205cf22c0a9182606f8de18f3f95fc55526356699: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:04.946444 containerd[1554]: time="2026-04-17T00:08:04.946408160Z" level=info msg="CreateContainer within sandbox \"862dcadea896312c9fb41a871fd3ea2d195d423626dbd257b13a271336cf67eb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6e141755e41236533cfff88205cf22c0a9182606f8de18f3f95fc55526356699\"" Apr 17 00:08:04.946851 containerd[1554]: time="2026-04-17T00:08:04.946832396Z" level=info msg="StartContainer for \"6e141755e41236533cfff88205cf22c0a9182606f8de18f3f95fc55526356699\"" Apr 17 00:08:04.948473 containerd[1554]: time="2026-04-17T00:08:04.948432206Z" level=info msg="connecting to shim 6e141755e41236533cfff88205cf22c0a9182606f8de18f3f95fc55526356699" address="unix:///run/containerd/s/1696f826a99804a8e1e029a121eba0f1764f0510351191dbc3833d135e91536a" protocol=ttrpc version=3 Apr 17 00:08:04.975237 systemd[1]: Started cri-containerd-6e141755e41236533cfff88205cf22c0a9182606f8de18f3f95fc55526356699.scope - libcontainer container 6e141755e41236533cfff88205cf22c0a9182606f8de18f3f95fc55526356699. 
Apr 17 00:08:05.084518 kubelet[2735]: E0417 00:08:05.084265 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:05.086996 containerd[1554]: time="2026-04-17T00:08:05.086930239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x6xmx,Uid:291fff59-a234-447d-bf37-bf2edb7a7686,Namespace:kube-system,Attempt:0,}" Apr 17 00:08:05.100435 containerd[1554]: time="2026-04-17T00:08:05.100404399Z" level=info msg="StartContainer for \"6e141755e41236533cfff88205cf22c0a9182606f8de18f3f95fc55526356699\" returns successfully" Apr 17 00:08:05.100944 containerd[1554]: time="2026-04-17T00:08:05.100882825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f7dfb777-nwsb6,Uid:b8ee9421-114f-42e6-8b33-b6b49acbf949,Namespace:calico-system,Attempt:0,}" Apr 17 00:08:05.130543 containerd[1554]: time="2026-04-17T00:08:05.129962401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7857d958f-t7c9t,Uid:3ff4324b-f73c-4867-bebd-0f2f3d60a9ae,Namespace:calico-system,Attempt:0,}" Apr 17 00:08:05.131649 containerd[1554]: time="2026-04-17T00:08:05.131406093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-776bdf9464-98fst,Uid:a6b510cb-eace-469e-8840-ce52365e8af1,Namespace:calico-system,Attempt:0,}" Apr 17 00:08:05.136715 containerd[1554]: time="2026-04-17T00:08:05.136426333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-j6qwc,Uid:70b0eb97-2013-448c-8115-21c3dc1415a1,Namespace:calico-system,Attempt:0,}" Apr 17 00:08:05.144716 kubelet[2735]: E0417 00:08:05.144542 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:05.147258 containerd[1554]: 
time="2026-04-17T00:08:05.147230218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcn6t,Uid:b3226986-1113-4b4e-90f5-4d61e0f410e9,Namespace:kube-system,Attempt:0,}" Apr 17 00:08:05.153711 containerd[1554]: time="2026-04-17T00:08:05.153673689Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f7dfb777-lbd8z,Uid:dd6be1ad-8a91-4e2c-b9de-0116fc64a64f,Namespace:calico-system,Attempt:0,}" Apr 17 00:08:05.700074 containerd[1554]: 2026-04-17 00:08:05.520 [INFO][3696] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" Apr 17 00:08:05.700074 containerd[1554]: 2026-04-17 00:08:05.520 [INFO][3696] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" iface="eth0" netns="/var/run/netns/cni-84faa5ba-2a98-32c5-a8ba-3b36b17ed5fb" Apr 17 00:08:05.700074 containerd[1554]: 2026-04-17 00:08:05.522 [INFO][3696] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" iface="eth0" netns="/var/run/netns/cni-84faa5ba-2a98-32c5-a8ba-3b36b17ed5fb" Apr 17 00:08:05.700074 containerd[1554]: 2026-04-17 00:08:05.522 [INFO][3696] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" iface="eth0" netns="/var/run/netns/cni-84faa5ba-2a98-32c5-a8ba-3b36b17ed5fb" Apr 17 00:08:05.700074 containerd[1554]: 2026-04-17 00:08:05.522 [INFO][3696] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" Apr 17 00:08:05.700074 containerd[1554]: 2026-04-17 00:08:05.522 [INFO][3696] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" Apr 17 00:08:05.700074 containerd[1554]: 2026-04-17 00:08:05.635 [INFO][3762] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" HandleID="k8s-pod-network.805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" Workload="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" Apr 17 00:08:05.700074 containerd[1554]: 2026-04-17 00:08:05.638 [INFO][3762] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:05.700074 containerd[1554]: 2026-04-17 00:08:05.639 [INFO][3762] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:08:05.700378 containerd[1554]: 2026-04-17 00:08:05.668 [WARNING][3762] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" HandleID="k8s-pod-network.805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" Workload="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" Apr 17 00:08:05.700378 containerd[1554]: 2026-04-17 00:08:05.668 [INFO][3762] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" HandleID="k8s-pod-network.805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" Workload="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" Apr 17 00:08:05.700378 containerd[1554]: 2026-04-17 00:08:05.674 [INFO][3762] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:05.700378 containerd[1554]: 2026-04-17 00:08:05.689 [INFO][3696] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92" Apr 17 00:08:05.703238 containerd[1554]: time="2026-04-17T00:08:05.700769319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f7dfb777-nwsb6,Uid:b8ee9421-114f-42e6-8b33-b6b49acbf949,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:08:05.703397 kubelet[2735]: E0417 00:08:05.701304 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 17 00:08:05.703397 kubelet[2735]: E0417 00:08:05.701495 2735 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-56f7dfb777-nwsb6" Apr 17 00:08:05.703397 kubelet[2735]: E0417 00:08:05.701542 2735 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-56f7dfb777-nwsb6" Apr 17 00:08:05.703501 kubelet[2735]: E0417 00:08:05.701973 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56f7dfb777-nwsb6_calico-system(b8ee9421-114f-42e6-8b33-b6b49acbf949)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56f7dfb777-nwsb6_calico-system(b8ee9421-114f-42e6-8b33-b6b49acbf949)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"805b1c658ba245130bf788dc8872adf330a1d644d5464b4d97b80477eb0bfa92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-56f7dfb777-nwsb6" podUID="b8ee9421-114f-42e6-8b33-b6b49acbf949" Apr 17 00:08:05.737777 containerd[1554]: 2026-04-17 00:08:05.564 [INFO][3738] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" Apr 17 00:08:05.737777 containerd[1554]: 2026-04-17 00:08:05.564 [INFO][3738] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" iface="eth0" netns="/var/run/netns/cni-8c9d3dc6-665e-a2d7-2130-a17d3d61ee6e" Apr 17 00:08:05.737777 containerd[1554]: 2026-04-17 00:08:05.569 [INFO][3738] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" iface="eth0" netns="/var/run/netns/cni-8c9d3dc6-665e-a2d7-2130-a17d3d61ee6e" Apr 17 00:08:05.737777 containerd[1554]: 2026-04-17 00:08:05.572 [INFO][3738] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" iface="eth0" netns="/var/run/netns/cni-8c9d3dc6-665e-a2d7-2130-a17d3d61ee6e" Apr 17 00:08:05.737777 containerd[1554]: 2026-04-17 00:08:05.572 [INFO][3738] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" Apr 17 00:08:05.737777 containerd[1554]: 2026-04-17 00:08:05.572 [INFO][3738] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" Apr 17 00:08:05.737777 containerd[1554]: 2026-04-17 00:08:05.694 [INFO][3774] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" HandleID="k8s-pod-network.666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" Apr 17 00:08:05.737777 containerd[1554]: 2026-04-17 00:08:05.694 [INFO][3774] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 00:08:05.737777 containerd[1554]: 2026-04-17 00:08:05.694 [INFO][3774] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:08:05.738488 containerd[1554]: 2026-04-17 00:08:05.715 [WARNING][3774] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" HandleID="k8s-pod-network.666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" Apr 17 00:08:05.738488 containerd[1554]: 2026-04-17 00:08:05.715 [INFO][3774] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" HandleID="k8s-pod-network.666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" Apr 17 00:08:05.738488 containerd[1554]: 2026-04-17 00:08:05.718 [INFO][3774] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:05.738488 containerd[1554]: 2026-04-17 00:08:05.726 [INFO][3738] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516" Apr 17 00:08:05.743824 containerd[1554]: time="2026-04-17T00:08:05.743781191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcn6t,Uid:b3226986-1113-4b4e-90f5-4d61e0f410e9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:08:05.744370 kubelet[2735]: E0417 00:08:05.744296 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:08:05.744370 kubelet[2735]: E0417 00:08:05.744357 2735 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pcn6t" Apr 17 00:08:05.744595 kubelet[2735]: E0417 00:08:05.744377 2735 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pcn6t" Apr 17 00:08:05.744595 kubelet[2735]: E0417 00:08:05.744416 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pcn6t_kube-system(b3226986-1113-4b4e-90f5-4d61e0f410e9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pcn6t_kube-system(b3226986-1113-4b4e-90f5-4d61e0f410e9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"666e4b96b8957b8135de17add2ef5b7bcdb8b102a253085afd49ad0cbb21e516\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pcn6t" podUID="b3226986-1113-4b4e-90f5-4d61e0f410e9" Apr 17 00:08:05.790309 systemd[1]: Created slice kubepods-besteffort-pod0d2c4fdb_3d93_490b_aa53_b22402e33fe4.slice - libcontainer container kubepods-besteffort-pod0d2c4fdb_3d93_490b_aa53_b22402e33fe4.slice. 
Apr 17 00:08:05.797265 containerd[1554]: time="2026-04-17T00:08:05.797008311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glxlh,Uid:0d2c4fdb-3d93-490b-aa53-b22402e33fe4,Namespace:calico-system,Attempt:0,}" Apr 17 00:08:05.844490 systemd-networkd[1449]: cali01b2295b3d8: Link UP Apr 17 00:08:05.846870 systemd-networkd[1449]: cali01b2295b3d8: Gained carrier Apr 17 00:08:05.896494 containerd[1554]: 2026-04-17 00:08:05.433 [ERROR][3644] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 00:08:05.896494 containerd[1554]: 2026-04-17 00:08:05.492 [INFO][3644] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0 calico-apiserver-56f7dfb777- calico-system dd6be1ad-8a91-4e2c-b9de-0116fc64a64f 837 0 2026-04-17 00:07:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56f7dfb777 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-171-230 calico-apiserver-56f7dfb777-lbd8z eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali01b2295b3d8 [] [] }} ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-lbd8z" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-" Apr 17 00:08:05.896494 containerd[1554]: 2026-04-17 00:08:05.492 [INFO][3644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-lbd8z" 
WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0" Apr 17 00:08:05.896494 containerd[1554]: 2026-04-17 00:08:05.705 [INFO][3757] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" HandleID="k8s-pod-network.8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Workload="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0" Apr 17 00:08:05.896702 containerd[1554]: 2026-04-17 00:08:05.716 [INFO][3757] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" HandleID="k8s-pod-network.8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Workload="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374330), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-230", "pod":"calico-apiserver-56f7dfb777-lbd8z", "timestamp":"2026-04-17 00:08:05.705984478 +0000 UTC"}, Hostname:"172-238-171-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000570580)} Apr 17 00:08:05.896702 containerd[1554]: 2026-04-17 00:08:05.716 [INFO][3757] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:05.896702 containerd[1554]: 2026-04-17 00:08:05.719 [INFO][3757] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 00:08:05.896702 containerd[1554]: 2026-04-17 00:08:05.719 [INFO][3757] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-230' Apr 17 00:08:05.896702 containerd[1554]: 2026-04-17 00:08:05.730 [INFO][3757] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" host="172-238-171-230" Apr 17 00:08:05.896702 containerd[1554]: 2026-04-17 00:08:05.739 [INFO][3757] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-230" Apr 17 00:08:05.896702 containerd[1554]: 2026-04-17 00:08:05.757 [INFO][3757] ipam/ipam.go 558: Ran out of existing affine blocks for host host="172-238-171-230" Apr 17 00:08:05.896702 containerd[1554]: 2026-04-17 00:08:05.762 [INFO][3757] ipam/ipam.go 575: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="172-238-171-230" Apr 17 00:08:05.896702 containerd[1554]: 2026-04-17 00:08:05.767 [INFO][3757] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.12.64/26 Apr 17 00:08:05.896702 containerd[1554]: 2026-04-17 00:08:05.767 [INFO][3757] ipam/ipam.go 588: Found unclaimed block in 5.138759ms host="172-238-171-230" subnet=192.168.12.64/26 Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.767 [INFO][3757] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="172-238-171-230" subnet=192.168.12.64/26 Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.780 [INFO][3757] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="172-238-171-230" subnet=192.168.12.64/26 Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.780 [INFO][3757] ipam/ipam.go 160: Attempting to load block cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.794 [INFO][3757] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it 
cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.799 [INFO][3757] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.801 [INFO][3757] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.801 [INFO][3757] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="172-238-171-230" subnet=192.168.12.64/26 Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.805 [INFO][3757] ipam/ipam_block_reader_writer.go 267: Successfully created block Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.805 [INFO][3757] ipam/ipam_block_reader_writer.go 283: Confirming affinity host="172-238-171-230" subnet=192.168.12.64/26 Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.810 [INFO][3757] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="172-238-171-230" subnet=192.168.12.64/26 Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.810 [INFO][3757] ipam/ipam.go 623: Block '192.168.12.64/26' has 64 free ips which is more than 1 ips required. 
host="172-238-171-230" subnet=192.168.12.64/26 Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.810 [INFO][3757] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" host="172-238-171-230" Apr 17 00:08:05.896919 containerd[1554]: 2026-04-17 00:08:05.812 [INFO][3757] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc Apr 17 00:08:05.897851 containerd[1554]: 2026-04-17 00:08:05.816 [INFO][3757] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" host="172-238-171-230" Apr 17 00:08:05.897851 containerd[1554]: 2026-04-17 00:08:05.822 [INFO][3757] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.12.64/26] block=192.168.12.64/26 handle="k8s-pod-network.8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" host="172-238-171-230" Apr 17 00:08:05.897851 containerd[1554]: 2026-04-17 00:08:05.823 [INFO][3757] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.12.64/26] handle="k8s-pod-network.8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" host="172-238-171-230" Apr 17 00:08:05.897851 containerd[1554]: 2026-04-17 00:08:05.823 [INFO][3757] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 00:08:05.897851 containerd[1554]: 2026-04-17 00:08:05.823 [INFO][3757] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.12.64/26] IPv6=[] ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" HandleID="k8s-pod-network.8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Workload="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0" Apr 17 00:08:05.897990 containerd[1554]: 2026-04-17 00:08:05.828 [INFO][3644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-lbd8z" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0", GenerateName:"calico-apiserver-56f7dfb777-", Namespace:"calico-system", SelfLink:"", UID:"dd6be1ad-8a91-4e2c-b9de-0116fc64a64f", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f7dfb777", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"", Pod:"calico-apiserver-56f7dfb777-lbd8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.64/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali01b2295b3d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:05.898058 containerd[1554]: 2026-04-17 00:08:05.829 [INFO][3644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.64/32] ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-lbd8z" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0" Apr 17 00:08:05.898058 containerd[1554]: 2026-04-17 00:08:05.830 [INFO][3644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali01b2295b3d8 ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-lbd8z" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0" Apr 17 00:08:05.898058 containerd[1554]: 2026-04-17 00:08:05.844 [INFO][3644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-lbd8z" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0" Apr 17 00:08:05.898120 containerd[1554]: 2026-04-17 00:08:05.845 [INFO][3644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-lbd8z" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0", GenerateName:"calico-apiserver-56f7dfb777-", Namespace:"calico-system", SelfLink:"", UID:"dd6be1ad-8a91-4e2c-b9de-0116fc64a64f", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f7dfb777", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc", Pod:"calico-apiserver-56f7dfb777-lbd8z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.64/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali01b2295b3d8", MAC:"fa:b9:dd:ba:52:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:05.898171 containerd[1554]: 2026-04-17 00:08:05.864 [INFO][3644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-lbd8z" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--lbd8z-eth0" Apr 17 00:08:05.910874 containerd[1554]: 2026-04-17 00:08:05.568 [INFO][3676] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" Apr 17 00:08:05.910874 containerd[1554]: 2026-04-17 00:08:05.568 [INFO][3676] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" iface="eth0" netns="/var/run/netns/cni-26e9b080-3bf1-a4a6-89bb-8dd2562c813d" Apr 17 00:08:05.910874 containerd[1554]: 2026-04-17 00:08:05.569 [INFO][3676] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" iface="eth0" netns="/var/run/netns/cni-26e9b080-3bf1-a4a6-89bb-8dd2562c813d" Apr 17 00:08:05.910874 containerd[1554]: 2026-04-17 00:08:05.570 [INFO][3676] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" iface="eth0" netns="/var/run/netns/cni-26e9b080-3bf1-a4a6-89bb-8dd2562c813d" Apr 17 00:08:05.910874 containerd[1554]: 2026-04-17 00:08:05.570 [INFO][3676] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" Apr 17 00:08:05.910874 containerd[1554]: 2026-04-17 00:08:05.570 [INFO][3676] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" Apr 17 00:08:05.910874 containerd[1554]: 2026-04-17 00:08:05.758 [INFO][3772] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" HandleID="k8s-pod-network.5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" Workload="172--238--171--230-k8s-whisker--776bdf9464--98fst-eth0" Apr 17 00:08:05.910874 containerd[1554]: 2026-04-17 00:08:05.758 [INFO][3772] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 00:08:05.910874 containerd[1554]: 2026-04-17 00:08:05.823 [INFO][3772] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:08:05.911278 containerd[1554]: 2026-04-17 00:08:05.827 [WARNING][3772] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" HandleID="k8s-pod-network.5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" Workload="172--238--171--230-k8s-whisker--776bdf9464--98fst-eth0" Apr 17 00:08:05.911278 containerd[1554]: 2026-04-17 00:08:05.828 [INFO][3772] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" HandleID="k8s-pod-network.5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" Workload="172--238--171--230-k8s-whisker--776bdf9464--98fst-eth0" Apr 17 00:08:05.911278 containerd[1554]: 2026-04-17 00:08:05.829 [INFO][3772] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:05.911278 containerd[1554]: 2026-04-17 00:08:05.851 [INFO][3676] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd" Apr 17 00:08:05.936508 containerd[1554]: 2026-04-17 00:08:05.589 [INFO][3694] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" Apr 17 00:08:05.936508 containerd[1554]: 2026-04-17 00:08:05.589 [INFO][3694] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" iface="eth0" netns="/var/run/netns/cni-ee69be30-f6d7-acfe-b1d3-f161e4d97fca" Apr 17 00:08:05.936508 containerd[1554]: 2026-04-17 00:08:05.589 [INFO][3694] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" iface="eth0" netns="/var/run/netns/cni-ee69be30-f6d7-acfe-b1d3-f161e4d97fca" Apr 17 00:08:05.936508 containerd[1554]: 2026-04-17 00:08:05.590 [INFO][3694] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" iface="eth0" netns="/var/run/netns/cni-ee69be30-f6d7-acfe-b1d3-f161e4d97fca" Apr 17 00:08:05.936508 containerd[1554]: 2026-04-17 00:08:05.590 [INFO][3694] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" Apr 17 00:08:05.936508 containerd[1554]: 2026-04-17 00:08:05.590 [INFO][3694] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" Apr 17 00:08:05.936508 containerd[1554]: 2026-04-17 00:08:05.772 [INFO][3779] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" HandleID="k8s-pod-network.13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" Apr 17 00:08:05.936508 containerd[1554]: 2026-04-17 00:08:05.773 [INFO][3779] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:05.936508 containerd[1554]: 2026-04-17 00:08:05.832 [INFO][3779] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:08:05.938665 containerd[1554]: 2026-04-17 00:08:05.854 [WARNING][3779] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" HandleID="k8s-pod-network.13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" Apr 17 00:08:05.938665 containerd[1554]: 2026-04-17 00:08:05.854 [INFO][3779] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" HandleID="k8s-pod-network.13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" Apr 17 00:08:05.938665 containerd[1554]: 2026-04-17 00:08:05.859 [INFO][3779] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:05.938665 containerd[1554]: 2026-04-17 00:08:05.890 [INFO][3694] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9" Apr 17 00:08:05.944170 containerd[1554]: time="2026-04-17T00:08:05.944107970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-776bdf9464-98fst,Uid:a6b510cb-eace-469e-8840-ce52365e8af1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:08:05.945422 kubelet[2735]: E0417 00:08:05.945325 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 
00:08:05.946964 kubelet[2735]: E0417 00:08:05.945723 2735 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dcdfab269ee097988d8b3fb0fbe3a27bf59bb74f0293b285ecf44ffafdb10dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-776bdf9464-98fst" Apr 17 00:08:05.957360 systemd[1]: run-netns-cni\x2d26e9b080\x2d3bf1\x2da4a6\x2d89bb\x2d8dd2562c813d.mount: Deactivated successfully. Apr 17 00:08:05.957488 systemd[1]: run-netns-cni\x2dee69be30\x2df6d7\x2dacfe\x2db1d3\x2df161e4d97fca.mount: Deactivated successfully. Apr 17 00:08:05.965265 containerd[1554]: time="2026-04-17T00:08:05.965229363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x6xmx,Uid:291fff59-a234-447d-bf37-bf2edb7a7686,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:08:05.967377 kubelet[2735]: E0417 00:08:05.967344 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:08:05.967831 kubelet[2735]: E0417 00:08:05.967687 2735 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x6xmx" Apr 17 00:08:05.967831 kubelet[2735]: E0417 00:08:05.967714 2735 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-x6xmx" Apr 17 00:08:05.971736 kubelet[2735]: E0417 00:08:05.968452 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-x6xmx_kube-system(291fff59-a234-447d-bf37-bf2edb7a7686)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-x6xmx_kube-system(291fff59-a234-447d-bf37-bf2edb7a7686)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13fb8828e47ac5481272d72c82ec2f4e23835133a522b846879cbb60c7239de9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-x6xmx" podUID="291fff59-a234-447d-bf37-bf2edb7a7686" Apr 17 00:08:05.971923 containerd[1554]: 2026-04-17 00:08:05.640 [INFO][3721] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" Apr 17 00:08:05.971923 containerd[1554]: 2026-04-17 00:08:05.640 [INFO][3721] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" iface="eth0" netns="/var/run/netns/cni-8d26201b-c2dd-beb8-aa48-3bbf411dd766" Apr 17 00:08:05.971923 containerd[1554]: 2026-04-17 00:08:05.641 [INFO][3721] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" iface="eth0" netns="/var/run/netns/cni-8d26201b-c2dd-beb8-aa48-3bbf411dd766" Apr 17 00:08:05.971923 containerd[1554]: 2026-04-17 00:08:05.642 [INFO][3721] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" iface="eth0" netns="/var/run/netns/cni-8d26201b-c2dd-beb8-aa48-3bbf411dd766" Apr 17 00:08:05.971923 containerd[1554]: 2026-04-17 00:08:05.642 [INFO][3721] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" Apr 17 00:08:05.971923 containerd[1554]: 2026-04-17 00:08:05.642 [INFO][3721] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" Apr 17 00:08:05.971923 containerd[1554]: 2026-04-17 00:08:05.802 [INFO][3792] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" HandleID="k8s-pod-network.163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" Workload="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" Apr 17 00:08:05.971923 containerd[1554]: 2026-04-17 00:08:05.803 [INFO][3792] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:05.971923 containerd[1554]: 2026-04-17 00:08:05.859 [INFO][3792] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:08:05.972157 containerd[1554]: 2026-04-17 00:08:05.891 [WARNING][3792] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" HandleID="k8s-pod-network.163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" Workload="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" Apr 17 00:08:05.972157 containerd[1554]: 2026-04-17 00:08:05.891 [INFO][3792] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" HandleID="k8s-pod-network.163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" Workload="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" Apr 17 00:08:05.972157 containerd[1554]: 2026-04-17 00:08:05.900 [INFO][3792] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:05.972157 containerd[1554]: 2026-04-17 00:08:05.964 [INFO][3721] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c" Apr 17 00:08:05.977090 systemd[1]: run-netns-cni\x2d8d26201b\x2dc2dd\x2dbeb8\x2daa48\x2d3bbf411dd766.mount: Deactivated successfully. Apr 17 00:08:05.978751 containerd[1554]: 2026-04-17 00:08:05.707 [INFO][3737] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" Apr 17 00:08:05.978751 containerd[1554]: 2026-04-17 00:08:05.707 [INFO][3737] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" iface="eth0" netns="/var/run/netns/cni-5214bc15-a0dd-34c0-1a6f-77a31f869550" Apr 17 00:08:05.978751 containerd[1554]: 2026-04-17 00:08:05.708 [INFO][3737] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" iface="eth0" netns="/var/run/netns/cni-5214bc15-a0dd-34c0-1a6f-77a31f869550" Apr 17 00:08:05.978751 containerd[1554]: 2026-04-17 00:08:05.708 [INFO][3737] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" iface="eth0" netns="/var/run/netns/cni-5214bc15-a0dd-34c0-1a6f-77a31f869550" Apr 17 00:08:05.978751 containerd[1554]: 2026-04-17 00:08:05.708 [INFO][3737] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" Apr 17 00:08:05.978751 containerd[1554]: 2026-04-17 00:08:05.708 [INFO][3737] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" Apr 17 00:08:05.978751 containerd[1554]: 2026-04-17 00:08:05.815 [INFO][3801] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" HandleID="k8s-pod-network.0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" Workload="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" Apr 17 00:08:05.978751 containerd[1554]: 2026-04-17 00:08:05.815 [INFO][3801] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:05.978751 containerd[1554]: 2026-04-17 00:08:05.898 [INFO][3801] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:08:05.978969 containerd[1554]: 2026-04-17 00:08:05.945 [WARNING][3801] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" HandleID="k8s-pod-network.0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" Workload="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" Apr 17 00:08:05.978969 containerd[1554]: 2026-04-17 00:08:05.945 [INFO][3801] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" HandleID="k8s-pod-network.0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" Workload="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" Apr 17 00:08:05.978969 containerd[1554]: 2026-04-17 00:08:05.962 [INFO][3801] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:05.978969 containerd[1554]: 2026-04-17 00:08:05.967 [INFO][3737] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855" Apr 17 00:08:05.979405 containerd[1554]: time="2026-04-17T00:08:05.979217249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-j6qwc,Uid:70b0eb97-2013-448c-8115-21c3dc1415a1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:08:05.981094 kubelet[2735]: E0417 00:08:05.981074 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:05.986068 kubelet[2735]: E0417 00:08:05.983470 2735 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:08:05.986068 kubelet[2735]: E0417 00:08:05.983510 2735 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-j6qwc" Apr 17 00:08:05.986068 kubelet[2735]: E0417 00:08:05.983527 2735 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-j6qwc" Apr 17 00:08:05.985708 systemd[1]: run-netns-cni\x2d5214bc15\x2da0dd\x2d34c0\x2d1a6f\x2d77a31f869550.mount: Deactivated successfully. 
Apr 17 00:08:05.986222 containerd[1554]: time="2026-04-17T00:08:05.983975981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f7dfb777-nwsb6,Uid:b8ee9421-114f-42e6-8b33-b6b49acbf949,Namespace:calico-system,Attempt:0,}" Apr 17 00:08:05.986249 kubelet[2735]: E0417 00:08:05.983556 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-j6qwc_calico-system(70b0eb97-2013-448c-8115-21c3dc1415a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-j6qwc_calico-system(70b0eb97-2013-448c-8115-21c3dc1415a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"163084762490118b53565cc32ca68ce60a32f4823b8f8f17a499e83fa26d2d2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-j6qwc" podUID="70b0eb97-2013-448c-8115-21c3dc1415a1" Apr 17 00:08:05.990073 containerd[1554]: time="2026-04-17T00:08:05.988108306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcn6t,Uid:b3226986-1113-4b4e-90f5-4d61e0f410e9,Namespace:kube-system,Attempt:0,}" Apr 17 00:08:05.997612 containerd[1554]: time="2026-04-17T00:08:05.997559709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7857d958f-t7c9t,Uid:3ff4324b-f73c-4867-bebd-0f2f3d60a9ae,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:08:06.003350 kubelet[2735]: E0417 00:08:06.003203 2735 log.go:32] "RunPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 00:08:06.003350 kubelet[2735]: E0417 00:08:06.003251 2735 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7857d958f-t7c9t" Apr 17 00:08:06.003350 kubelet[2735]: E0417 00:08:06.003268 2735 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7857d958f-t7c9t" Apr 17 00:08:06.003468 kubelet[2735]: E0417 00:08:06.003305 2735 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7857d958f-t7c9t_calico-system(3ff4324b-f73c-4867-bebd-0f2f3d60a9ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7857d958f-t7c9t_calico-system(3ff4324b-f73c-4867-bebd-0f2f3d60a9ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b1d39397ae5074164c1e5d80373bcfad87a83143f94cb52e62d761ffa5e5855\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7857d958f-t7c9t" podUID="3ff4324b-f73c-4867-bebd-0f2f3d60a9ae" Apr 17 00:08:06.026564 kubelet[2735]: I0417 00:08:06.026257 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-f4jt7" podStartSLOduration=2.690189004 podStartE2EDuration="12.026242068s" podCreationTimestamp="2026-04-17 00:07:54 +0000 UTC" firstStartedPulling="2026-04-17 00:07:54.512329021 +0000 UTC m=+18.837497773" lastFinishedPulling="2026-04-17 00:08:03.848382085 +0000 UTC m=+28.173550837" observedRunningTime="2026-04-17 00:08:06.023975931 +0000 UTC m=+30.349144683" watchObservedRunningTime="2026-04-17 00:08:06.026242068 +0000 UTC m=+30.351410820" Apr 17 00:08:06.049556 containerd[1554]: time="2026-04-17T00:08:06.049453330Z" level=info msg="connecting to shim 8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc" address="unix:///run/containerd/s/f2848db29f7048358a08d9e611691ffc72b8e28b3d38cbb86ac5c403572c2a23" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:08:06.103399 systemd[1]: Started cri-containerd-8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc.scope - libcontainer container 8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc. 
Apr 17 00:08:06.200546 systemd-networkd[1449]: cali315bf0e3f7c: Link UP Apr 17 00:08:06.206845 systemd-networkd[1449]: cali315bf0e3f7c: Gained carrier Apr 17 00:08:06.224107 containerd[1554]: 2026-04-17 00:08:05.951 [ERROR][3812] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 00:08:06.224107 containerd[1554]: 2026-04-17 00:08:06.019 [INFO][3812] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--230-k8s-csi--node--driver--glxlh-eth0 csi-node-driver- calico-system 0d2c4fdb-3d93-490b-aa53-b22402e33fe4 718 0 2026-04-17 00:07:54 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-238-171-230 csi-node-driver-glxlh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali315bf0e3f7c [] [] }} ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" Namespace="calico-system" Pod="csi-node-driver-glxlh" WorkloadEndpoint="172--238--171--230-k8s-csi--node--driver--glxlh-" Apr 17 00:08:06.224107 containerd[1554]: 2026-04-17 00:08:06.020 [INFO][3812] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" Namespace="calico-system" Pod="csi-node-driver-glxlh" WorkloadEndpoint="172--238--171--230-k8s-csi--node--driver--glxlh-eth0" Apr 17 00:08:06.224107 containerd[1554]: 2026-04-17 00:08:06.130 [INFO][3864] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" 
HandleID="k8s-pod-network.a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" Workload="172--238--171--230-k8s-csi--node--driver--glxlh-eth0" Apr 17 00:08:06.224347 containerd[1554]: 2026-04-17 00:08:06.139 [INFO][3864] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" HandleID="k8s-pod-network.a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" Workload="172--238--171--230-k8s-csi--node--driver--glxlh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004100a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-230", "pod":"csi-node-driver-glxlh", "timestamp":"2026-04-17 00:08:06.130152014 +0000 UTC"}, Hostname:"172-238-171-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002cedc0)} Apr 17 00:08:06.224347 containerd[1554]: 2026-04-17 00:08:06.140 [INFO][3864] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:06.224347 containerd[1554]: 2026-04-17 00:08:06.140 [INFO][3864] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 00:08:06.224347 containerd[1554]: 2026-04-17 00:08:06.140 [INFO][3864] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-230' Apr 17 00:08:06.224347 containerd[1554]: 2026-04-17 00:08:06.143 [INFO][3864] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" host="172-238-171-230" Apr 17 00:08:06.224347 containerd[1554]: 2026-04-17 00:08:06.149 [INFO][3864] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-230" Apr 17 00:08:06.224347 containerd[1554]: 2026-04-17 00:08:06.161 [INFO][3864] ipam/ipam.go 526: Trying affinity for 192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:06.224347 containerd[1554]: 2026-04-17 00:08:06.166 [INFO][3864] ipam/ipam.go 160: Attempting to load block cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:06.224347 containerd[1554]: 2026-04-17 00:08:06.172 [INFO][3864] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:06.224524 containerd[1554]: 2026-04-17 00:08:06.172 [INFO][3864] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" host="172-238-171-230" Apr 17 00:08:06.224524 containerd[1554]: 2026-04-17 00:08:06.176 [INFO][3864] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4 Apr 17 00:08:06.224524 containerd[1554]: 2026-04-17 00:08:06.179 [INFO][3864] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" host="172-238-171-230" Apr 17 00:08:06.224524 containerd[1554]: 2026-04-17 00:08:06.184 [INFO][3864] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.12.66/26] block=192.168.12.64/26 
handle="k8s-pod-network.a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" host="172-238-171-230" Apr 17 00:08:06.224524 containerd[1554]: 2026-04-17 00:08:06.184 [INFO][3864] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.12.66/26] handle="k8s-pod-network.a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" host="172-238-171-230" Apr 17 00:08:06.224524 containerd[1554]: 2026-04-17 00:08:06.185 [INFO][3864] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:06.224524 containerd[1554]: 2026-04-17 00:08:06.185 [INFO][3864] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.12.66/26] IPv6=[] ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" HandleID="k8s-pod-network.a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" Workload="172--238--171--230-k8s-csi--node--driver--glxlh-eth0" Apr 17 00:08:06.224655 containerd[1554]: 2026-04-17 00:08:06.191 [INFO][3812] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" Namespace="calico-system" Pod="csi-node-driver-glxlh" WorkloadEndpoint="172--238--171--230-k8s-csi--node--driver--glxlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-csi--node--driver--glxlh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0d2c4fdb-3d93-490b-aa53-b22402e33fe4", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"", Pod:"csi-node-driver-glxlh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali315bf0e3f7c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:06.224708 containerd[1554]: 2026-04-17 00:08:06.191 [INFO][3812] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.66/32] ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" Namespace="calico-system" Pod="csi-node-driver-glxlh" WorkloadEndpoint="172--238--171--230-k8s-csi--node--driver--glxlh-eth0" Apr 17 00:08:06.224708 containerd[1554]: 2026-04-17 00:08:06.191 [INFO][3812] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali315bf0e3f7c ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" Namespace="calico-system" Pod="csi-node-driver-glxlh" WorkloadEndpoint="172--238--171--230-k8s-csi--node--driver--glxlh-eth0" Apr 17 00:08:06.224708 containerd[1554]: 2026-04-17 00:08:06.203 [INFO][3812] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" Namespace="calico-system" Pod="csi-node-driver-glxlh" WorkloadEndpoint="172--238--171--230-k8s-csi--node--driver--glxlh-eth0" Apr 17 00:08:06.224760 containerd[1554]: 2026-04-17 00:08:06.210 [INFO][3812] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" Namespace="calico-system" Pod="csi-node-driver-glxlh" WorkloadEndpoint="172--238--171--230-k8s-csi--node--driver--glxlh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-csi--node--driver--glxlh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0d2c4fdb-3d93-490b-aa53-b22402e33fe4", ResourceVersion:"718", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4", Pod:"csi-node-driver-glxlh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.12.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali315bf0e3f7c", MAC:"f2:d2:10:2e:d2:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:06.224812 containerd[1554]: 2026-04-17 00:08:06.218 [INFO][3812] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" 
Namespace="calico-system" Pod="csi-node-driver-glxlh" WorkloadEndpoint="172--238--171--230-k8s-csi--node--driver--glxlh-eth0" Apr 17 00:08:06.233490 containerd[1554]: time="2026-04-17T00:08:06.233440443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f7dfb777-lbd8z,Uid:dd6be1ad-8a91-4e2c-b9de-0116fc64a64f,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc\"" Apr 17 00:08:06.236675 containerd[1554]: time="2026-04-17T00:08:06.236642886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 00:08:06.278321 containerd[1554]: time="2026-04-17T00:08:06.278272945Z" level=info msg="connecting to shim a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4" address="unix:///run/containerd/s/26daf171672e365530888cfa75c037da9241792fe04254f12c4c14d8e3cc8571" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:08:06.297419 systemd-networkd[1449]: calia032dfc6093: Link UP Apr 17 00:08:06.298649 systemd-networkd[1449]: calia032dfc6093: Gained carrier Apr 17 00:08:06.318355 systemd[1]: Started cri-containerd-a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4.scope - libcontainer container a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4. 
Apr 17 00:08:06.322950 containerd[1554]: 2026-04-17 00:08:06.073 [ERROR][3846] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 00:08:06.322950 containerd[1554]: 2026-04-17 00:08:06.096 [INFO][3846] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0 calico-apiserver-56f7dfb777- calico-system b8ee9421-114f-42e6-8b33-b6b49acbf949 859 0 2026-04-17 00:07:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56f7dfb777 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-238-171-230 calico-apiserver-56f7dfb777-nwsb6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia032dfc6093 [] [] }} ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-nwsb6" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-" Apr 17 00:08:06.322950 containerd[1554]: 2026-04-17 00:08:06.097 [INFO][3846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-nwsb6" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" Apr 17 00:08:06.322950 containerd[1554]: 2026-04-17 00:08:06.153 [INFO][3906] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" HandleID="k8s-pod-network.8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" 
Workload="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" Apr 17 00:08:06.323135 containerd[1554]: 2026-04-17 00:08:06.168 [INFO][3906] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" HandleID="k8s-pod-network.8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" Workload="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdd60), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-230", "pod":"calico-apiserver-56f7dfb777-nwsb6", "timestamp":"2026-04-17 00:08:06.153840674 +0000 UTC"}, Hostname:"172-238-171-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000221600)} Apr 17 00:08:06.323135 containerd[1554]: 2026-04-17 00:08:06.168 [INFO][3906] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:06.323135 containerd[1554]: 2026-04-17 00:08:06.185 [INFO][3906] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 00:08:06.323135 containerd[1554]: 2026-04-17 00:08:06.185 [INFO][3906] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-230' Apr 17 00:08:06.323135 containerd[1554]: 2026-04-17 00:08:06.245 [INFO][3906] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" host="172-238-171-230" Apr 17 00:08:06.323135 containerd[1554]: 2026-04-17 00:08:06.255 [INFO][3906] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-230" Apr 17 00:08:06.323135 containerd[1554]: 2026-04-17 00:08:06.264 [INFO][3906] ipam/ipam.go 526: Trying affinity for 192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:06.323135 containerd[1554]: 2026-04-17 00:08:06.266 [INFO][3906] ipam/ipam.go 160: Attempting to load block cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:06.323135 containerd[1554]: 2026-04-17 00:08:06.269 [INFO][3906] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:06.323454 containerd[1554]: 2026-04-17 00:08:06.269 [INFO][3906] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" host="172-238-171-230" Apr 17 00:08:06.323454 containerd[1554]: 2026-04-17 00:08:06.273 [INFO][3906] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b Apr 17 00:08:06.323454 containerd[1554]: 2026-04-17 00:08:06.279 [INFO][3906] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" host="172-238-171-230" Apr 17 00:08:06.323454 containerd[1554]: 2026-04-17 00:08:06.287 [INFO][3906] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.12.67/26] block=192.168.12.64/26 
handle="k8s-pod-network.8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" host="172-238-171-230" Apr 17 00:08:06.323454 containerd[1554]: 2026-04-17 00:08:06.288 [INFO][3906] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.12.67/26] handle="k8s-pod-network.8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" host="172-238-171-230" Apr 17 00:08:06.323454 containerd[1554]: 2026-04-17 00:08:06.288 [INFO][3906] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:06.323454 containerd[1554]: 2026-04-17 00:08:06.288 [INFO][3906] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.12.67/26] IPv6=[] ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" HandleID="k8s-pod-network.8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" Workload="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" Apr 17 00:08:06.323588 containerd[1554]: 2026-04-17 00:08:06.293 [INFO][3846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-nwsb6" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0", GenerateName:"calico-apiserver-56f7dfb777-", Namespace:"calico-system", SelfLink:"", UID:"b8ee9421-114f-42e6-8b33-b6b49acbf949", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f7dfb777", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"", Pod:"calico-apiserver-56f7dfb777-nwsb6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia032dfc6093", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:06.323637 containerd[1554]: 2026-04-17 00:08:06.293 [INFO][3846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.67/32] ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-nwsb6" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" Apr 17 00:08:06.323637 containerd[1554]: 2026-04-17 00:08:06.293 [INFO][3846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia032dfc6093 ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-nwsb6" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" Apr 17 00:08:06.323637 containerd[1554]: 2026-04-17 00:08:06.301 [INFO][3846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-nwsb6" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" Apr 17 00:08:06.323741 containerd[1554]: 2026-04-17 00:08:06.303 [INFO][3846] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-nwsb6" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0", GenerateName:"calico-apiserver-56f7dfb777-", Namespace:"calico-system", SelfLink:"", UID:"b8ee9421-114f-42e6-8b33-b6b49acbf949", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f7dfb777", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b", Pod:"calico-apiserver-56f7dfb777-nwsb6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.12.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia032dfc6093", MAC:"a6:7d:da:02:d8:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:06.323790 containerd[1554]: 2026-04-17 00:08:06.313 [INFO][3846] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" Namespace="calico-system" Pod="calico-apiserver-56f7dfb777-nwsb6" WorkloadEndpoint="172--238--171--230-k8s-calico--apiserver--56f7dfb777--nwsb6-eth0" Apr 17 00:08:06.363884 containerd[1554]: time="2026-04-17T00:08:06.363844883Z" level=info msg="connecting to shim 8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b" address="unix:///run/containerd/s/42321566506756182df7521ad6696169b34679ab64aeae8043f4d8eac9328e88" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:08:06.393026 containerd[1554]: time="2026-04-17T00:08:06.392938531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-glxlh,Uid:0d2c4fdb-3d93-490b-aa53-b22402e33fe4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4\"" Apr 17 00:08:06.398391 systemd[1]: Started cri-containerd-8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b.scope - libcontainer container 8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b. 
Apr 17 00:08:06.413517 systemd-networkd[1449]: calic483fccd089: Link UP Apr 17 00:08:06.413733 systemd-networkd[1449]: calic483fccd089: Gained carrier Apr 17 00:08:06.431456 containerd[1554]: 2026-04-17 00:08:06.117 [ERROR][3842] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 00:08:06.431456 containerd[1554]: 2026-04-17 00:08:06.144 [INFO][3842] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0 coredns-674b8bbfcf- kube-system b3226986-1113-4b4e-90f5-4d61e0f410e9 860 0 2026-04-17 00:07:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-171-230 coredns-674b8bbfcf-pcn6t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic483fccd089 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcn6t" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-" Apr 17 00:08:06.431456 containerd[1554]: 2026-04-17 00:08:06.145 [INFO][3842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcn6t" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" Apr 17 00:08:06.431456 containerd[1554]: 2026-04-17 00:08:06.184 [INFO][3920] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" HandleID="k8s-pod-network.5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" 
Workload="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" Apr 17 00:08:06.431716 containerd[1554]: 2026-04-17 00:08:06.198 [INFO][3920] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" HandleID="k8s-pod-network.5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e6170), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-171-230", "pod":"coredns-674b8bbfcf-pcn6t", "timestamp":"2026-04-17 00:08:06.184549144 +0000 UTC"}, Hostname:"172-238-171-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000168c60)} Apr 17 00:08:06.431716 containerd[1554]: 2026-04-17 00:08:06.198 [INFO][3920] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:06.431716 containerd[1554]: 2026-04-17 00:08:06.289 [INFO][3920] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 00:08:06.431716 containerd[1554]: 2026-04-17 00:08:06.289 [INFO][3920] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-230' Apr 17 00:08:06.431716 containerd[1554]: 2026-04-17 00:08:06.345 [INFO][3920] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" host="172-238-171-230" Apr 17 00:08:06.431716 containerd[1554]: 2026-04-17 00:08:06.354 [INFO][3920] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-230" Apr 17 00:08:06.431716 containerd[1554]: 2026-04-17 00:08:06.367 [INFO][3920] ipam/ipam.go 526: Trying affinity for 192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:06.431716 containerd[1554]: 2026-04-17 00:08:06.378 [INFO][3920] ipam/ipam.go 160: Attempting to load block cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:06.431716 containerd[1554]: 2026-04-17 00:08:06.382 [INFO][3920] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:06.431960 containerd[1554]: 2026-04-17 00:08:06.382 [INFO][3920] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" host="172-238-171-230" Apr 17 00:08:06.431960 containerd[1554]: 2026-04-17 00:08:06.384 [INFO][3920] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481 Apr 17 00:08:06.431960 containerd[1554]: 2026-04-17 00:08:06.396 [INFO][3920] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" host="172-238-171-230" Apr 17 00:08:06.431960 containerd[1554]: 2026-04-17 00:08:06.404 [INFO][3920] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.12.68/26] block=192.168.12.64/26 
handle="k8s-pod-network.5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" host="172-238-171-230" Apr 17 00:08:06.431960 containerd[1554]: 2026-04-17 00:08:06.404 [INFO][3920] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.12.68/26] handle="k8s-pod-network.5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" host="172-238-171-230" Apr 17 00:08:06.431960 containerd[1554]: 2026-04-17 00:08:06.404 [INFO][3920] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:06.431960 containerd[1554]: 2026-04-17 00:08:06.404 [INFO][3920] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.12.68/26] IPv6=[] ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" HandleID="k8s-pod-network.5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" Apr 17 00:08:06.432249 containerd[1554]: 2026-04-17 00:08:06.407 [INFO][3842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcn6t" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b3226986-1113-4b4e-90f5-4d61e0f410e9", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"", Pod:"coredns-674b8bbfcf-pcn6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic483fccd089", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:06.432249 containerd[1554]: 2026-04-17 00:08:06.407 [INFO][3842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.68/32] ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcn6t" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" Apr 17 00:08:06.432249 containerd[1554]: 2026-04-17 00:08:06.408 [INFO][3842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic483fccd089 ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcn6t" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" Apr 17 00:08:06.432249 containerd[1554]: 2026-04-17 00:08:06.410 [INFO][3842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-pcn6t" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" Apr 17 00:08:06.432249 containerd[1554]: 2026-04-17 00:08:06.410 [INFO][3842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcn6t" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b3226986-1113-4b4e-90f5-4d61e0f410e9", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481", Pod:"coredns-674b8bbfcf-pcn6t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic483fccd089", MAC:"d6:d2:09:85:8e:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:06.432249 containerd[1554]: 2026-04-17 00:08:06.426 [INFO][3842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" Namespace="kube-system" Pod="coredns-674b8bbfcf-pcn6t" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--pcn6t-eth0" Apr 17 00:08:06.451945 containerd[1554]: time="2026-04-17T00:08:06.451910936Z" level=info msg="connecting to shim 5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481" address="unix:///run/containerd/s/6aece90a58f60719d53ec900cad583cecc37e4f01a5ca0e1c8ee6c2f2d8cfdeb" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:08:06.481742 systemd[1]: Started cri-containerd-5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481.scope - libcontainer container 5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481. 
Apr 17 00:08:06.510932 containerd[1554]: time="2026-04-17T00:08:06.510856500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f7dfb777-nwsb6,Uid:b8ee9421-114f-42e6-8b33-b6b49acbf949,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b\"" Apr 17 00:08:06.551655 containerd[1554]: time="2026-04-17T00:08:06.551607584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pcn6t,Uid:b3226986-1113-4b4e-90f5-4d61e0f410e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481\"" Apr 17 00:08:06.553183 kubelet[2735]: E0417 00:08:06.553156 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:06.563308 containerd[1554]: time="2026-04-17T00:08:06.563265870Z" level=info msg="CreateContainer within sandbox \"5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 00:08:06.569230 containerd[1554]: time="2026-04-17T00:08:06.568791859Z" level=info msg="Container bd6fb12c8e4e088e0b9033a3642c9fc713edf4205f0ed91f595ee252b90a0c31: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:06.572808 containerd[1554]: time="2026-04-17T00:08:06.572777677Z" level=info msg="CreateContainer within sandbox \"5470ca60bb7f12c4ecc0e46d79c0de601b43bd3d2b05e4186b4467f6085c8481\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bd6fb12c8e4e088e0b9033a3642c9fc713edf4205f0ed91f595ee252b90a0c31\"" Apr 17 00:08:06.573751 containerd[1554]: time="2026-04-17T00:08:06.573691782Z" level=info msg="StartContainer for \"bd6fb12c8e4e088e0b9033a3642c9fc713edf4205f0ed91f595ee252b90a0c31\"" Apr 17 00:08:06.574935 containerd[1554]: time="2026-04-17T00:08:06.574904565Z" level=info 
msg="connecting to shim bd6fb12c8e4e088e0b9033a3642c9fc713edf4205f0ed91f595ee252b90a0c31" address="unix:///run/containerd/s/6aece90a58f60719d53ec900cad583cecc37e4f01a5ca0e1c8ee6c2f2d8cfdeb" protocol=ttrpc version=3 Apr 17 00:08:06.598166 systemd[1]: Started cri-containerd-bd6fb12c8e4e088e0b9033a3642c9fc713edf4205f0ed91f595ee252b90a0c31.scope - libcontainer container bd6fb12c8e4e088e0b9033a3642c9fc713edf4205f0ed91f595ee252b90a0c31. Apr 17 00:08:06.635176 containerd[1554]: time="2026-04-17T00:08:06.635127863Z" level=info msg="StartContainer for \"bd6fb12c8e4e088e0b9033a3642c9fc713edf4205f0ed91f595ee252b90a0c31\" returns successfully" Apr 17 00:08:06.987437 kubelet[2735]: E0417 00:08:06.985582 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:06.990077 containerd[1554]: time="2026-04-17T00:08:06.988379869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-j6qwc,Uid:70b0eb97-2013-448c-8115-21c3dc1415a1,Namespace:calico-system,Attempt:0,}" Apr 17 00:08:06.990361 kubelet[2735]: E0417 00:08:06.989307 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:07.007262 containerd[1554]: time="2026-04-17T00:08:07.006893069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x6xmx,Uid:291fff59-a234-447d-bf37-bf2edb7a7686,Namespace:kube-system,Attempt:0,}" Apr 17 00:08:07.008936 containerd[1554]: time="2026-04-17T00:08:07.007149099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7857d958f-t7c9t,Uid:3ff4324b-f73c-4867-bebd-0f2f3d60a9ae,Namespace:calico-system,Attempt:0,}" Apr 17 00:08:07.016441 kubelet[2735]: I0417 00:08:07.015150 2735 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/coredns-674b8bbfcf-pcn6t" podStartSLOduration=25.015113707 podStartE2EDuration="25.015113707s" podCreationTimestamp="2026-04-17 00:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:08:07.01069062 +0000 UTC m=+31.335859372" watchObservedRunningTime="2026-04-17 00:08:07.015113707 +0000 UTC m=+31.340282459" Apr 17 00:08:07.104214 kubelet[2735]: I0417 00:08:07.104181 2735 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a6b510cb-eace-469e-8840-ce52365e8af1-whisker-backend-key-pair\") pod \"a6b510cb-eace-469e-8840-ce52365e8af1\" (UID: \"a6b510cb-eace-469e-8840-ce52365e8af1\") " Apr 17 00:08:07.105984 kubelet[2735]: I0417 00:08:07.104890 2735 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f4cn\" (UniqueName: \"kubernetes.io/projected/a6b510cb-eace-469e-8840-ce52365e8af1-kube-api-access-7f4cn\") pod \"a6b510cb-eace-469e-8840-ce52365e8af1\" (UID: \"a6b510cb-eace-469e-8840-ce52365e8af1\") " Apr 17 00:08:07.105984 kubelet[2735]: I0417 00:08:07.104917 2735 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a6b510cb-eace-469e-8840-ce52365e8af1-nginx-config\") pod \"a6b510cb-eace-469e-8840-ce52365e8af1\" (UID: \"a6b510cb-eace-469e-8840-ce52365e8af1\") " Apr 17 00:08:07.105984 kubelet[2735]: I0417 00:08:07.104945 2735 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6b510cb-eace-469e-8840-ce52365e8af1-whisker-ca-bundle\") pod \"a6b510cb-eace-469e-8840-ce52365e8af1\" (UID: \"a6b510cb-eace-469e-8840-ce52365e8af1\") " Apr 17 00:08:07.105984 kubelet[2735]: I0417 00:08:07.105699 2735 operation_generator.go:781] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/configmap/a6b510cb-eace-469e-8840-ce52365e8af1-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "a6b510cb-eace-469e-8840-ce52365e8af1" (UID: "a6b510cb-eace-469e-8840-ce52365e8af1"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 00:08:07.106719 kubelet[2735]: I0417 00:08:07.106701 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6b510cb-eace-469e-8840-ce52365e8af1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a6b510cb-eace-469e-8840-ce52365e8af1" (UID: "a6b510cb-eace-469e-8840-ce52365e8af1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 00:08:07.129231 systemd[1]: var-lib-kubelet-pods-a6b510cb\x2deace\x2d469e\x2d8840\x2dce52365e8af1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 00:08:07.131945 kubelet[2735]: I0417 00:08:07.131821 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6b510cb-eace-469e-8840-ce52365e8af1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a6b510cb-eace-469e-8840-ce52365e8af1" (UID: "a6b510cb-eace-469e-8840-ce52365e8af1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 00:08:07.133293 kubelet[2735]: I0417 00:08:07.133253 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6b510cb-eace-469e-8840-ce52365e8af1-kube-api-access-7f4cn" (OuterVolumeSpecName: "kube-api-access-7f4cn") pod "a6b510cb-eace-469e-8840-ce52365e8af1" (UID: "a6b510cb-eace-469e-8840-ce52365e8af1"). InnerVolumeSpecName "kube-api-access-7f4cn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 00:08:07.161324 systemd-networkd[1449]: cali01b2295b3d8: Gained IPv6LL Apr 17 00:08:07.206236 kubelet[2735]: I0417 00:08:07.206133 2735 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a6b510cb-eace-469e-8840-ce52365e8af1-whisker-ca-bundle\") on node \"172-238-171-230\" DevicePath \"\"" Apr 17 00:08:07.206236 kubelet[2735]: I0417 00:08:07.206180 2735 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a6b510cb-eace-469e-8840-ce52365e8af1-whisker-backend-key-pair\") on node \"172-238-171-230\" DevicePath \"\"" Apr 17 00:08:07.206236 kubelet[2735]: I0417 00:08:07.206191 2735 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7f4cn\" (UniqueName: \"kubernetes.io/projected/a6b510cb-eace-469e-8840-ce52365e8af1-kube-api-access-7f4cn\") on node \"172-238-171-230\" DevicePath \"\"" Apr 17 00:08:07.206236 kubelet[2735]: I0417 00:08:07.206201 2735 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a6b510cb-eace-469e-8840-ce52365e8af1-nginx-config\") on node \"172-238-171-230\" DevicePath \"\"" Apr 17 00:08:07.363580 systemd-networkd[1449]: calic7d043ed430: Link UP Apr 17 00:08:07.363947 systemd-networkd[1449]: calic7d043ed430: Gained carrier Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.192 [ERROR][4134] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.242 [INFO][4134] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0 calico-kube-controllers-7857d958f- calico-system 
3ff4324b-f73c-4867-bebd-0f2f3d60a9ae 864 0 2026-04-17 00:07:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7857d958f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-238-171-230 calico-kube-controllers-7857d958f-t7c9t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic7d043ed430 [] [] }} ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Namespace="calico-system" Pod="calico-kube-controllers-7857d958f-t7c9t" WorkloadEndpoint="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.242 [INFO][4134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Namespace="calico-system" Pod="calico-kube-controllers-7857d958f-t7c9t" WorkloadEndpoint="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.307 [INFO][4258] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" HandleID="k8s-pod-network.5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Workload="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.314 [INFO][4258] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" HandleID="k8s-pod-network.5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Workload="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048cea0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-230", "pod":"calico-kube-controllers-7857d958f-t7c9t", "timestamp":"2026-04-17 00:08:07.307950807 +0000 UTC"}, Hostname:"172-238-171-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000256c60)} Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.315 [INFO][4258] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.315 [INFO][4258] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.315 [INFO][4258] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-230' Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.319 [INFO][4258] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" host="172-238-171-230" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.323 [INFO][4258] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-230" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.329 [INFO][4258] ipam/ipam.go 526: Trying affinity for 192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.331 [INFO][4258] ipam/ipam.go 160: Attempting to load block cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.334 [INFO][4258] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.334 [INFO][4258] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.12.64/26 
handle="k8s-pod-network.5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" host="172-238-171-230" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.336 [INFO][4258] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7 Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.340 [INFO][4258] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" host="172-238-171-230" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.345 [INFO][4258] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.12.69/26] block=192.168.12.64/26 handle="k8s-pod-network.5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" host="172-238-171-230" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.345 [INFO][4258] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.12.69/26] handle="k8s-pod-network.5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" host="172-238-171-230" Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.346 [INFO][4258] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 00:08:07.383055 containerd[1554]: 2026-04-17 00:08:07.346 [INFO][4258] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.12.69/26] IPv6=[] ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" HandleID="k8s-pod-network.5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Workload="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" Apr 17 00:08:07.384167 containerd[1554]: 2026-04-17 00:08:07.352 [INFO][4134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Namespace="calico-system" Pod="calico-kube-controllers-7857d958f-t7c9t" WorkloadEndpoint="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0", GenerateName:"calico-kube-controllers-7857d958f-", Namespace:"calico-system", SelfLink:"", UID:"3ff4324b-f73c-4867-bebd-0f2f3d60a9ae", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7857d958f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"", Pod:"calico-kube-controllers-7857d958f-t7c9t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7d043ed430", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:07.384167 containerd[1554]: 2026-04-17 00:08:07.352 [INFO][4134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.69/32] ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Namespace="calico-system" Pod="calico-kube-controllers-7857d958f-t7c9t" WorkloadEndpoint="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" Apr 17 00:08:07.384167 containerd[1554]: 2026-04-17 00:08:07.352 [INFO][4134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7d043ed430 ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Namespace="calico-system" Pod="calico-kube-controllers-7857d958f-t7c9t" WorkloadEndpoint="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" Apr 17 00:08:07.384167 containerd[1554]: 2026-04-17 00:08:07.360 [INFO][4134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Namespace="calico-system" Pod="calico-kube-controllers-7857d958f-t7c9t" WorkloadEndpoint="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" Apr 17 00:08:07.384167 containerd[1554]: 2026-04-17 00:08:07.361 [INFO][4134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Namespace="calico-system" Pod="calico-kube-controllers-7857d958f-t7c9t" WorkloadEndpoint="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0", GenerateName:"calico-kube-controllers-7857d958f-", Namespace:"calico-system", SelfLink:"", UID:"3ff4324b-f73c-4867-bebd-0f2f3d60a9ae", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7857d958f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7", Pod:"calico-kube-controllers-7857d958f-t7c9t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.12.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7d043ed430", MAC:"52:46:10:93:eb:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:07.384167 containerd[1554]: 2026-04-17 00:08:07.372 [INFO][4134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" Namespace="calico-system" Pod="calico-kube-controllers-7857d958f-t7c9t" WorkloadEndpoint="172--238--171--230-k8s-calico--kube--controllers--7857d958f--t7c9t-eth0" Apr 17 00:08:07.418175 systemd-networkd[1449]: cali315bf0e3f7c: Gained IPv6LL Apr 17 
00:08:07.450088 containerd[1554]: time="2026-04-17T00:08:07.449007649Z" level=info msg="connecting to shim 5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7" address="unix:///run/containerd/s/664adc1f0a849bbe10283d3a2ef665dcbdde3a04917286c5f9486089a3fffb93" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:08:07.507602 systemd[1]: Started cri-containerd-5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7.scope - libcontainer container 5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7. Apr 17 00:08:07.508171 systemd-networkd[1449]: cali28696af500c: Link UP Apr 17 00:08:07.511763 systemd-networkd[1449]: cali28696af500c: Gained carrier Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.116 [ERROR][4131] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.149 [INFO][4131] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0 coredns-674b8bbfcf- kube-system 291fff59-a234-447d-bf37-bf2edb7a7686 862 0 2026-04-17 00:07:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-238-171-230 coredns-674b8bbfcf-x6xmx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali28696af500c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6xmx" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.149 [INFO][4131] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6xmx" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.321 [INFO][4207] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" HandleID="k8s-pod-network.76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.337 [INFO][4207] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" HandleID="k8s-pod-network.76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e180), Attrs:map[string]string{"namespace":"kube-system", "node":"172-238-171-230", "pod":"coredns-674b8bbfcf-x6xmx", "timestamp":"2026-04-17 00:08:07.321444138 +0000 UTC"}, Hostname:"172-238-171-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004ea000)} Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.337 [INFO][4207] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.346 [INFO][4207] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.346 [INFO][4207] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-230' Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.422 [INFO][4207] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" host="172-238-171-230" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.429 [INFO][4207] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-230" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.450 [INFO][4207] ipam/ipam.go 526: Trying affinity for 192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.455 [INFO][4207] ipam/ipam.go 160: Attempting to load block cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.462 [INFO][4207] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.462 [INFO][4207] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" host="172-238-171-230" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.464 [INFO][4207] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.475 [INFO][4207] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" host="172-238-171-230" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.487 [INFO][4207] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.12.70/26] block=192.168.12.64/26 
handle="k8s-pod-network.76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" host="172-238-171-230" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.487 [INFO][4207] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.12.70/26] handle="k8s-pod-network.76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" host="172-238-171-230" Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.487 [INFO][4207] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:07.563069 containerd[1554]: 2026-04-17 00:08:07.487 [INFO][4207] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.12.70/26] IPv6=[] ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" HandleID="k8s-pod-network.76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Workload="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" Apr 17 00:08:07.563701 containerd[1554]: 2026-04-17 00:08:07.502 [INFO][4131] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6xmx" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"291fff59-a234-447d-bf37-bf2edb7a7686", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"", Pod:"coredns-674b8bbfcf-x6xmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28696af500c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:07.563701 containerd[1554]: 2026-04-17 00:08:07.502 [INFO][4131] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.70/32] ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6xmx" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" Apr 17 00:08:07.563701 containerd[1554]: 2026-04-17 00:08:07.502 [INFO][4131] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28696af500c ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6xmx" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" Apr 17 00:08:07.563701 containerd[1554]: 2026-04-17 00:08:07.517 [INFO][4131] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-x6xmx" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" Apr 17 00:08:07.563701 containerd[1554]: 2026-04-17 00:08:07.526 [INFO][4131] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6xmx" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"291fff59-a234-447d-bf37-bf2edb7a7686", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f", Pod:"coredns-674b8bbfcf-x6xmx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.12.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali28696af500c", MAC:"3e:a7:41:a5:dd:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:07.563701 containerd[1554]: 2026-04-17 00:08:07.547 [INFO][4131] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" Namespace="kube-system" Pod="coredns-674b8bbfcf-x6xmx" WorkloadEndpoint="172--238--171--230-k8s-coredns--674b8bbfcf--x6xmx-eth0" Apr 17 00:08:07.596487 systemd-networkd[1449]: cali0c831e47013: Link UP Apr 17 00:08:07.597759 systemd-networkd[1449]: cali0c831e47013: Gained carrier Apr 17 00:08:07.619644 containerd[1554]: time="2026-04-17T00:08:07.619257653Z" level=info msg="connecting to shim 76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f" address="unix:///run/containerd/s/9d74e33a0106a3822b85a124c11a4e66901dda6b78ee64b2dcb086a1d3de2f3f" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.182 [ERROR][4142] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.225 [INFO][4142] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0 goldmane-5b85766d88- calico-system 70b0eb97-2013-448c-8115-21c3dc1415a1 863 0 2026-04-17 00:07:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] 
map[] [] [] []} {k8s 172-238-171-230 goldmane-5b85766d88-j6qwc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali0c831e47013 [] [] }} ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Namespace="calico-system" Pod="goldmane-5b85766d88-j6qwc" WorkloadEndpoint="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.225 [INFO][4142] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Namespace="calico-system" Pod="goldmane-5b85766d88-j6qwc" WorkloadEndpoint="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.376 [INFO][4236] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" HandleID="k8s-pod-network.e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Workload="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.400 [INFO][4236] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" HandleID="k8s-pod-network.e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Workload="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002760e0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-230", "pod":"goldmane-5b85766d88-j6qwc", "timestamp":"2026-04-17 00:08:07.376765077 +0000 UTC"}, Hostname:"172-238-171-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000115080)} Apr 17 00:08:07.639093 
containerd[1554]: 2026-04-17 00:08:07.401 [INFO][4236] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.488 [INFO][4236] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.488 [INFO][4236] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-230' Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.526 [INFO][4236] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" host="172-238-171-230" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.536 [INFO][4236] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-230" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.549 [INFO][4236] ipam/ipam.go 526: Trying affinity for 192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.558 [INFO][4236] ipam/ipam.go 160: Attempting to load block cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.561 [INFO][4236] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.561 [INFO][4236] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" host="172-238-171-230" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.567 [INFO][4236] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.574 [INFO][4236] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.12.64/26 
handle="k8s-pod-network.e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" host="172-238-171-230" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.582 [INFO][4236] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.12.71/26] block=192.168.12.64/26 handle="k8s-pod-network.e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" host="172-238-171-230" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.582 [INFO][4236] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.12.71/26] handle="k8s-pod-network.e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" host="172-238-171-230" Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.582 [INFO][4236] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:07.639093 containerd[1554]: 2026-04-17 00:08:07.582 [INFO][4236] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.12.71/26] IPv6=[] ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" HandleID="k8s-pod-network.e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Workload="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" Apr 17 00:08:07.639630 containerd[1554]: 2026-04-17 00:08:07.588 [INFO][4142] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Namespace="calico-system" Pod="goldmane-5b85766d88-j6qwc" WorkloadEndpoint="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"70b0eb97-2013-448c-8115-21c3dc1415a1", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 53, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"", Pod:"goldmane-5b85766d88-j6qwc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c831e47013", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:07.639630 containerd[1554]: 2026-04-17 00:08:07.588 [INFO][4142] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.71/32] ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Namespace="calico-system" Pod="goldmane-5b85766d88-j6qwc" WorkloadEndpoint="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" Apr 17 00:08:07.639630 containerd[1554]: 2026-04-17 00:08:07.588 [INFO][4142] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c831e47013 ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Namespace="calico-system" Pod="goldmane-5b85766d88-j6qwc" WorkloadEndpoint="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" Apr 17 00:08:07.639630 containerd[1554]: 2026-04-17 00:08:07.599 [INFO][4142] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Namespace="calico-system" Pod="goldmane-5b85766d88-j6qwc" 
WorkloadEndpoint="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" Apr 17 00:08:07.639630 containerd[1554]: 2026-04-17 00:08:07.601 [INFO][4142] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Namespace="calico-system" Pod="goldmane-5b85766d88-j6qwc" WorkloadEndpoint="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"70b0eb97-2013-448c-8115-21c3dc1415a1", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 7, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca", Pod:"goldmane-5b85766d88-j6qwc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.12.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali0c831e47013", MAC:"36:f6:72:4d:3a:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:07.639630 containerd[1554]: 2026-04-17 
00:08:07.618 [INFO][4142] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" Namespace="calico-system" Pod="goldmane-5b85766d88-j6qwc" WorkloadEndpoint="172--238--171--230-k8s-goldmane--5b85766d88--j6qwc-eth0" Apr 17 00:08:07.680634 systemd[1]: Started cri-containerd-76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f.scope - libcontainer container 76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f. Apr 17 00:08:07.697772 containerd[1554]: time="2026-04-17T00:08:07.697327356Z" level=info msg="connecting to shim e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca" address="unix:///run/containerd/s/0cf1ed5801cadee240cb578990bc708b7b25a303ff4ab6691f6231601623020b" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:08:07.795066 containerd[1554]: time="2026-04-17T00:08:07.793924734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-x6xmx,Uid:291fff59-a234-447d-bf37-bf2edb7a7686,Namespace:kube-system,Attempt:0,} returns sandbox id \"76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f\"" Apr 17 00:08:07.795325 systemd[1]: Started cri-containerd-e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca.scope - libcontainer container e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca. Apr 17 00:08:07.799986 systemd[1]: Removed slice kubepods-besteffort-poda6b510cb_eace_469e_8840_ce52365e8af1.slice - libcontainer container kubepods-besteffort-poda6b510cb_eace_469e_8840_ce52365e8af1.slice. 
Apr 17 00:08:07.801522 kubelet[2735]: E0417 00:08:07.800299 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:07.801464 systemd-networkd[1449]: calia032dfc6093: Gained IPv6LL Apr 17 00:08:07.810096 containerd[1554]: time="2026-04-17T00:08:07.808196041Z" level=info msg="CreateContainer within sandbox \"76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 00:08:07.835254 containerd[1554]: time="2026-04-17T00:08:07.835151693Z" level=info msg="Container 9129300813782d8d367269f6af8d4fcdc27e23279da6c3e46dc4dd94c1079371: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:07.842162 containerd[1554]: time="2026-04-17T00:08:07.841733080Z" level=info msg="CreateContainer within sandbox \"76f2a08e33d15c7cc6a3ba1fcc3f2be2b16c5165c45f94e201e09c33df98fa3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9129300813782d8d367269f6af8d4fcdc27e23279da6c3e46dc4dd94c1079371\"" Apr 17 00:08:07.842433 containerd[1554]: time="2026-04-17T00:08:07.842305587Z" level=info msg="StartContainer for \"9129300813782d8d367269f6af8d4fcdc27e23279da6c3e46dc4dd94c1079371\"" Apr 17 00:08:07.845198 containerd[1554]: time="2026-04-17T00:08:07.844976514Z" level=info msg="connecting to shim 9129300813782d8d367269f6af8d4fcdc27e23279da6c3e46dc4dd94c1079371" address="unix:///run/containerd/s/9d74e33a0106a3822b85a124c11a4e66901dda6b78ee64b2dcb086a1d3de2f3f" protocol=ttrpc version=3 Apr 17 00:08:07.865126 containerd[1554]: time="2026-04-17T00:08:07.864624874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7857d958f-t7c9t,Uid:3ff4324b-f73c-4867-bebd-0f2f3d60a9ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7\"" Apr 17 00:08:07.874210 
systemd[1]: Started cri-containerd-9129300813782d8d367269f6af8d4fcdc27e23279da6c3e46dc4dd94c1079371.scope - libcontainer container 9129300813782d8d367269f6af8d4fcdc27e23279da6c3e46dc4dd94c1079371. Apr 17 00:08:07.927385 systemd[1]: var-lib-kubelet-pods-a6b510cb\x2deace\x2d469e\x2d8840\x2dce52365e8af1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7f4cn.mount: Deactivated successfully. Apr 17 00:08:07.943919 containerd[1554]: time="2026-04-17T00:08:07.943747131Z" level=info msg="StartContainer for \"9129300813782d8d367269f6af8d4fcdc27e23279da6c3e46dc4dd94c1079371\" returns successfully" Apr 17 00:08:08.002534 kubelet[2735]: E0417 00:08:08.002483 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:08.003620 kubelet[2735]: E0417 00:08:08.003405 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:08.030994 kubelet[2735]: I0417 00:08:08.030579 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-x6xmx" podStartSLOduration=26.030566512 podStartE2EDuration="26.030566512s" podCreationTimestamp="2026-04-17 00:07:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 00:08:08.029752975 +0000 UTC m=+32.354921727" watchObservedRunningTime="2026-04-17 00:08:08.030566512 +0000 UTC m=+32.355735264" Apr 17 00:08:08.123592 systemd-networkd[1449]: calic483fccd089: Gained IPv6LL Apr 17 00:08:08.131362 containerd[1554]: time="2026-04-17T00:08:08.131208830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-j6qwc,Uid:70b0eb97-2013-448c-8115-21c3dc1415a1,Namespace:calico-system,Attempt:0,} returns 
sandbox id \"e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca\"" Apr 17 00:08:08.188287 systemd[1]: Created slice kubepods-besteffort-pod7d6c6b09_43fa_4a91_afcc_b865ded3697d.slice - libcontainer container kubepods-besteffort-pod7d6c6b09_43fa_4a91_afcc_b865ded3697d.slice. Apr 17 00:08:08.213399 kubelet[2735]: I0417 00:08:08.213356 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65j4m\" (UniqueName: \"kubernetes.io/projected/7d6c6b09-43fa-4a91-afcc-b865ded3697d-kube-api-access-65j4m\") pod \"whisker-65fbbd585-wsm94\" (UID: \"7d6c6b09-43fa-4a91-afcc-b865ded3697d\") " pod="calico-system/whisker-65fbbd585-wsm94" Apr 17 00:08:08.213399 kubelet[2735]: I0417 00:08:08.213399 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7d6c6b09-43fa-4a91-afcc-b865ded3697d-whisker-backend-key-pair\") pod \"whisker-65fbbd585-wsm94\" (UID: \"7d6c6b09-43fa-4a91-afcc-b865ded3697d\") " pod="calico-system/whisker-65fbbd585-wsm94" Apr 17 00:08:08.213399 kubelet[2735]: I0417 00:08:08.213421 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7d6c6b09-43fa-4a91-afcc-b865ded3697d-nginx-config\") pod \"whisker-65fbbd585-wsm94\" (UID: \"7d6c6b09-43fa-4a91-afcc-b865ded3697d\") " pod="calico-system/whisker-65fbbd585-wsm94" Apr 17 00:08:08.213399 kubelet[2735]: I0417 00:08:08.213436 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d6c6b09-43fa-4a91-afcc-b865ded3697d-whisker-ca-bundle\") pod \"whisker-65fbbd585-wsm94\" (UID: \"7d6c6b09-43fa-4a91-afcc-b865ded3697d\") " pod="calico-system/whisker-65fbbd585-wsm94" Apr 17 00:08:08.271651 kubelet[2735]: I0417 00:08:08.270842 2735 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 00:08:08.271651 kubelet[2735]: E0417 00:08:08.271201 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:08.496585 containerd[1554]: time="2026-04-17T00:08:08.496003685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65fbbd585-wsm94,Uid:7d6c6b09-43fa-4a91-afcc-b865ded3697d,Namespace:calico-system,Attempt:0,}" Apr 17 00:08:08.697173 systemd-networkd[1449]: calic7d043ed430: Gained IPv6LL Apr 17 00:08:08.704863 systemd-networkd[1449]: caliae67808eb81: Link UP Apr 17 00:08:08.706085 systemd-networkd[1449]: caliae67808eb81: Gained carrier Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.556 [ERROR][4538] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.576 [INFO][4538] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0 whisker-65fbbd585- calico-system 7d6c6b09-43fa-4a91-afcc-b865ded3697d 947 0 2026-04-17 00:08:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65fbbd585 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-238-171-230 whisker-65fbbd585-wsm94 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] caliae67808eb81 [] [] }} ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Namespace="calico-system" Pod="whisker-65fbbd585-wsm94" WorkloadEndpoint="172--238--171--230-k8s-whisker--65fbbd585--wsm94-" Apr 17 00:08:08.723064 containerd[1554]: 
2026-04-17 00:08:08.576 [INFO][4538] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Namespace="calico-system" Pod="whisker-65fbbd585-wsm94" WorkloadEndpoint="172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.653 [INFO][4553] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" HandleID="k8s-pod-network.bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Workload="172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.664 [INFO][4553] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" HandleID="k8s-pod-network.bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Workload="172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f6170), Attrs:map[string]string{"namespace":"calico-system", "node":"172-238-171-230", "pod":"whisker-65fbbd585-wsm94", "timestamp":"2026-04-17 00:08:08.653990856 +0000 UTC"}, Hostname:"172-238-171-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000152580)} Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.664 [INFO][4553] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.665 [INFO][4553] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.665 [INFO][4553] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-238-171-230' Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.667 [INFO][4553] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" host="172-238-171-230" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.673 [INFO][4553] ipam/ipam.go 409: Looking up existing affinities for host host="172-238-171-230" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.678 [INFO][4553] ipam/ipam.go 526: Trying affinity for 192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.680 [INFO][4553] ipam/ipam.go 160: Attempting to load block cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.683 [INFO][4553] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.12.64/26 host="172-238-171-230" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.683 [INFO][4553] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.12.64/26 handle="k8s-pod-network.bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" host="172-238-171-230" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.684 [INFO][4553] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700 Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.688 [INFO][4553] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.12.64/26 handle="k8s-pod-network.bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" host="172-238-171-230" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.695 [INFO][4553] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.12.72/26] block=192.168.12.64/26 
handle="k8s-pod-network.bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" host="172-238-171-230" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.695 [INFO][4553] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.12.72/26] handle="k8s-pod-network.bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" host="172-238-171-230" Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.695 [INFO][4553] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 00:08:08.723064 containerd[1554]: 2026-04-17 00:08:08.695 [INFO][4553] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.12.72/26] IPv6=[] ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" HandleID="k8s-pod-network.bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Workload="172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0" Apr 17 00:08:08.723593 containerd[1554]: 2026-04-17 00:08:08.701 [INFO][4538] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Namespace="calico-system" Pod="whisker-65fbbd585-wsm94" WorkloadEndpoint="172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0", GenerateName:"whisker-65fbbd585-", Namespace:"calico-system", SelfLink:"", UID:"7d6c6b09-43fa-4a91-afcc-b865ded3697d", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 8, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65fbbd585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"", Pod:"whisker-65fbbd585-wsm94", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliae67808eb81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:08.723593 containerd[1554]: 2026-04-17 00:08:08.701 [INFO][4538] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.12.72/32] ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Namespace="calico-system" Pod="whisker-65fbbd585-wsm94" WorkloadEndpoint="172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0" Apr 17 00:08:08.723593 containerd[1554]: 2026-04-17 00:08:08.701 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae67808eb81 ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Namespace="calico-system" Pod="whisker-65fbbd585-wsm94" WorkloadEndpoint="172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0" Apr 17 00:08:08.723593 containerd[1554]: 2026-04-17 00:08:08.706 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Namespace="calico-system" Pod="whisker-65fbbd585-wsm94" WorkloadEndpoint="172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0" Apr 17 00:08:08.723593 containerd[1554]: 2026-04-17 00:08:08.707 [INFO][4538] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Namespace="calico-system" 
Pod="whisker-65fbbd585-wsm94" WorkloadEndpoint="172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0", GenerateName:"whisker-65fbbd585-", Namespace:"calico-system", SelfLink:"", UID:"7d6c6b09-43fa-4a91-afcc-b865ded3697d", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 0, 8, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65fbbd585", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-238-171-230", ContainerID:"bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700", Pod:"whisker-65fbbd585-wsm94", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.12.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"caliae67808eb81", MAC:"62:2d:fd:18:1a:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 00:08:08.723593 containerd[1554]: 2026-04-17 00:08:08.718 [INFO][4538] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" Namespace="calico-system" Pod="whisker-65fbbd585-wsm94" WorkloadEndpoint="172--238--171--230-k8s-whisker--65fbbd585--wsm94-eth0" Apr 17 00:08:08.755144 containerd[1554]: 
time="2026-04-17T00:08:08.754262356Z" level=info msg="connecting to shim bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700" address="unix:///run/containerd/s/e478e4a49825730a9f27fea5c46bcad00239fcc9a94d86abcdd34c2e891c08b5" namespace=k8s.io protocol=ttrpc version=3 Apr 17 00:08:08.792256 systemd[1]: Started cri-containerd-bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700.scope - libcontainer container bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700. Apr 17 00:08:08.883169 containerd[1554]: time="2026-04-17T00:08:08.882877045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65fbbd585-wsm94,Uid:7d6c6b09-43fa-4a91-afcc-b865ded3697d,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700\"" Apr 17 00:08:09.010516 kubelet[2735]: E0417 00:08:09.010307 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:09.013802 kubelet[2735]: E0417 00:08:09.011563 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:09.013802 kubelet[2735]: E0417 00:08:09.012060 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:09.093160 containerd[1554]: time="2026-04-17T00:08:09.093111727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:09.093760 containerd[1554]: time="2026-04-17T00:08:09.093737854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes 
read=48415780" Apr 17 00:08:09.095126 containerd[1554]: time="2026-04-17T00:08:09.094348641Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:09.098522 containerd[1554]: time="2026-04-17T00:08:09.098493633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:09.101095 containerd[1554]: time="2026-04-17T00:08:09.100462825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.86378732s" Apr 17 00:08:09.101192 containerd[1554]: time="2026-04-17T00:08:09.101167072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 00:08:09.102169 containerd[1554]: time="2026-04-17T00:08:09.102151628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 00:08:09.107536 containerd[1554]: time="2026-04-17T00:08:09.107517785Z" level=info msg="CreateContainer within sandbox \"8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 00:08:09.121609 containerd[1554]: time="2026-04-17T00:08:09.121566805Z" level=info msg="Container 115a302bc9ef8d13f02a2f6a6e2cd71c48feeca14275b9891a7b2b470478f25f: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:09.129414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount35912545.mount: Deactivated 
successfully. Apr 17 00:08:09.139814 containerd[1554]: time="2026-04-17T00:08:09.139773526Z" level=info msg="CreateContainer within sandbox \"8d9f1aa52df13caf95c6776ab99512509b97d1bb3c4d9188d90c80d587878edc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"115a302bc9ef8d13f02a2f6a6e2cd71c48feeca14275b9891a7b2b470478f25f\"" Apr 17 00:08:09.143517 containerd[1554]: time="2026-04-17T00:08:09.143485841Z" level=info msg="StartContainer for \"115a302bc9ef8d13f02a2f6a6e2cd71c48feeca14275b9891a7b2b470478f25f\"" Apr 17 00:08:09.145237 systemd-networkd[1449]: cali28696af500c: Gained IPv6LL Apr 17 00:08:09.148764 containerd[1554]: time="2026-04-17T00:08:09.146215909Z" level=info msg="connecting to shim 115a302bc9ef8d13f02a2f6a6e2cd71c48feeca14275b9891a7b2b470478f25f" address="unix:///run/containerd/s/f2848db29f7048358a08d9e611691ffc72b8e28b3d38cbb86ac5c403572c2a23" protocol=ttrpc version=3 Apr 17 00:08:09.220850 systemd[1]: Started cri-containerd-115a302bc9ef8d13f02a2f6a6e2cd71c48feeca14275b9891a7b2b470478f25f.scope - libcontainer container 115a302bc9ef8d13f02a2f6a6e2cd71c48feeca14275b9891a7b2b470478f25f. 
Apr 17 00:08:09.287691 containerd[1554]: time="2026-04-17T00:08:09.286811946Z" level=info msg="StartContainer for \"115a302bc9ef8d13f02a2f6a6e2cd71c48feeca14275b9891a7b2b470478f25f\" returns successfully" Apr 17 00:08:09.594203 systemd-networkd[1449]: cali0c831e47013: Gained IPv6LL Apr 17 00:08:09.782205 kubelet[2735]: I0417 00:08:09.781749 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6b510cb-eace-469e-8840-ce52365e8af1" path="/var/lib/kubelet/pods/a6b510cb-eace-469e-8840-ce52365e8af1/volumes" Apr 17 00:08:09.964664 systemd-networkd[1449]: vxlan.calico: Link UP Apr 17 00:08:09.964676 systemd-networkd[1449]: vxlan.calico: Gained carrier Apr 17 00:08:09.978209 systemd-networkd[1449]: caliae67808eb81: Gained IPv6LL Apr 17 00:08:10.020916 kubelet[2735]: E0417 00:08:10.020835 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:10.035295 kubelet[2735]: I0417 00:08:10.035169 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-56f7dfb777-lbd8z" podStartSLOduration=14.168829198 podStartE2EDuration="17.035156366s" podCreationTimestamp="2026-04-17 00:07:53 +0000 UTC" firstStartedPulling="2026-04-17 00:08:06.235535421 +0000 UTC m=+30.560704173" lastFinishedPulling="2026-04-17 00:08:09.101862589 +0000 UTC m=+33.427031341" observedRunningTime="2026-04-17 00:08:10.033341123 +0000 UTC m=+34.358509875" watchObservedRunningTime="2026-04-17 00:08:10.035156366 +0000 UTC m=+34.360325168" Apr 17 00:08:10.606764 containerd[1554]: time="2026-04-17T00:08:10.606400032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:10.608494 containerd[1554]: time="2026-04-17T00:08:10.608472414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: 
active requests=0, bytes read=8792502" Apr 17 00:08:10.609460 containerd[1554]: time="2026-04-17T00:08:10.609439750Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:10.614499 containerd[1554]: time="2026-04-17T00:08:10.614468850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:10.617265 containerd[1554]: time="2026-04-17T00:08:10.617243469Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.514781732s" Apr 17 00:08:10.617389 containerd[1554]: time="2026-04-17T00:08:10.617372678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 00:08:10.620439 containerd[1554]: time="2026-04-17T00:08:10.620421677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 00:08:10.622397 containerd[1554]: time="2026-04-17T00:08:10.622376239Z" level=info msg="CreateContainer within sandbox \"a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 00:08:10.636996 containerd[1554]: time="2026-04-17T00:08:10.636200885Z" level=info msg="Container 2db11cac94c18798fcf05782c756289f721ae34d8493d5a93fa44117feb8108d: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:10.652542 containerd[1554]: time="2026-04-17T00:08:10.652508400Z" level=info msg="CreateContainer within 
sandbox \"a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2db11cac94c18798fcf05782c756289f721ae34d8493d5a93fa44117feb8108d\"" Apr 17 00:08:10.653245 containerd[1554]: time="2026-04-17T00:08:10.653193818Z" level=info msg="StartContainer for \"2db11cac94c18798fcf05782c756289f721ae34d8493d5a93fa44117feb8108d\"" Apr 17 00:08:10.656754 containerd[1554]: time="2026-04-17T00:08:10.656710194Z" level=info msg="connecting to shim 2db11cac94c18798fcf05782c756289f721ae34d8493d5a93fa44117feb8108d" address="unix:///run/containerd/s/26daf171672e365530888cfa75c037da9241792fe04254f12c4c14d8e3cc8571" protocol=ttrpc version=3 Apr 17 00:08:10.694236 systemd[1]: Started cri-containerd-2db11cac94c18798fcf05782c756289f721ae34d8493d5a93fa44117feb8108d.scope - libcontainer container 2db11cac94c18798fcf05782c756289f721ae34d8493d5a93fa44117feb8108d. Apr 17 00:08:10.765753 containerd[1554]: time="2026-04-17T00:08:10.765702896Z" level=info msg="StartContainer for \"2db11cac94c18798fcf05782c756289f721ae34d8493d5a93fa44117feb8108d\" returns successfully" Apr 17 00:08:10.805669 containerd[1554]: time="2026-04-17T00:08:10.805632269Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:10.807884 containerd[1554]: time="2026-04-17T00:08:10.807427172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 00:08:10.812031 containerd[1554]: time="2026-04-17T00:08:10.812008573Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 
190.933059ms" Apr 17 00:08:10.812286 containerd[1554]: time="2026-04-17T00:08:10.812231783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 00:08:10.815323 containerd[1554]: time="2026-04-17T00:08:10.815022142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 00:08:10.819069 containerd[1554]: time="2026-04-17T00:08:10.818778998Z" level=info msg="CreateContainer within sandbox \"8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 00:08:10.826065 containerd[1554]: time="2026-04-17T00:08:10.825183122Z" level=info msg="Container a401aac04a99d213f3fdc41fca8223cecb70cd4664d90d99e19619c063513936: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:10.839534 containerd[1554]: time="2026-04-17T00:08:10.839496036Z" level=info msg="CreateContainer within sandbox \"8e1350a52fd0e6bc5d06b6c5f73146f9df52aff864a3d65663c2789c7f970e9b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a401aac04a99d213f3fdc41fca8223cecb70cd4664d90d99e19619c063513936\"" Apr 17 00:08:10.842705 containerd[1554]: time="2026-04-17T00:08:10.842683203Z" level=info msg="StartContainer for \"a401aac04a99d213f3fdc41fca8223cecb70cd4664d90d99e19619c063513936\"" Apr 17 00:08:10.843805 containerd[1554]: time="2026-04-17T00:08:10.843770869Z" level=info msg="connecting to shim a401aac04a99d213f3fdc41fca8223cecb70cd4664d90d99e19619c063513936" address="unix:///run/containerd/s/42321566506756182df7521ad6696169b34679ab64aeae8043f4d8eac9328e88" protocol=ttrpc version=3 Apr 17 00:08:10.881014 systemd[1]: Started cri-containerd-a401aac04a99d213f3fdc41fca8223cecb70cd4664d90d99e19619c063513936.scope - libcontainer container a401aac04a99d213f3fdc41fca8223cecb70cd4664d90d99e19619c063513936. 
Apr 17 00:08:10.947388 containerd[1554]: time="2026-04-17T00:08:10.947353122Z" level=info msg="StartContainer for \"a401aac04a99d213f3fdc41fca8223cecb70cd4664d90d99e19619c063513936\" returns successfully" Apr 17 00:08:11.033674 kubelet[2735]: I0417 00:08:11.032353 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 00:08:11.033674 kubelet[2735]: E0417 00:08:11.033343 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:11.046226 kubelet[2735]: I0417 00:08:11.046177 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-56f7dfb777-nwsb6" podStartSLOduration=13.74561436 podStartE2EDuration="18.04616642s" podCreationTimestamp="2026-04-17 00:07:53 +0000 UTC" firstStartedPulling="2026-04-17 00:08:06.512959638 +0000 UTC m=+30.838128400" lastFinishedPulling="2026-04-17 00:08:10.813511708 +0000 UTC m=+35.138680460" observedRunningTime="2026-04-17 00:08:11.045814431 +0000 UTC m=+35.370983183" watchObservedRunningTime="2026-04-17 00:08:11.04616642 +0000 UTC m=+35.371335172" Apr 17 00:08:11.705246 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL Apr 17 00:08:12.035144 kubelet[2735]: I0417 00:08:12.034740 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 00:08:13.622939 containerd[1554]: time="2026-04-17T00:08:13.622879669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:13.623967 containerd[1554]: time="2026-04-17T00:08:13.623847927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 00:08:13.624433 containerd[1554]: time="2026-04-17T00:08:13.624409016Z" level=info msg="ImageCreate event 
name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:13.626548 containerd[1554]: time="2026-04-17T00:08:13.626428790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:13.627063 containerd[1554]: time="2026-04-17T00:08:13.627022688Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.811925076s" Apr 17 00:08:13.627099 containerd[1554]: time="2026-04-17T00:08:13.627067848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 00:08:13.628352 containerd[1554]: time="2026-04-17T00:08:13.628335694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 00:08:13.646313 containerd[1554]: time="2026-04-17T00:08:13.646280351Z" level=info msg="CreateContainer within sandbox \"5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 00:08:13.657481 containerd[1554]: time="2026-04-17T00:08:13.657425327Z" level=info msg="Container 3d08adf3c0e883f07fee8ff7f5e5c9dbe7affc9bfd972dc48c20f4cfcfbb9d05: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:13.664112 containerd[1554]: time="2026-04-17T00:08:13.664079167Z" level=info msg="CreateContainer within sandbox \"5ca051460341a4fe6ece6d6fc8be3c1841eedf14dcf3b66b979e26911a0fa7c7\" 
for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3d08adf3c0e883f07fee8ff7f5e5c9dbe7affc9bfd972dc48c20f4cfcfbb9d05\"" Apr 17 00:08:13.664822 containerd[1554]: time="2026-04-17T00:08:13.664540896Z" level=info msg="StartContainer for \"3d08adf3c0e883f07fee8ff7f5e5c9dbe7affc9bfd972dc48c20f4cfcfbb9d05\"" Apr 17 00:08:13.665635 containerd[1554]: time="2026-04-17T00:08:13.665612413Z" level=info msg="connecting to shim 3d08adf3c0e883f07fee8ff7f5e5c9dbe7affc9bfd972dc48c20f4cfcfbb9d05" address="unix:///run/containerd/s/664adc1f0a849bbe10283d3a2ef665dcbdde3a04917286c5f9486089a3fffb93" protocol=ttrpc version=3 Apr 17 00:08:13.694187 systemd[1]: Started cri-containerd-3d08adf3c0e883f07fee8ff7f5e5c9dbe7affc9bfd972dc48c20f4cfcfbb9d05.scope - libcontainer container 3d08adf3c0e883f07fee8ff7f5e5c9dbe7affc9bfd972dc48c20f4cfcfbb9d05. Apr 17 00:08:13.760937 containerd[1554]: time="2026-04-17T00:08:13.760862559Z" level=info msg="StartContainer for \"3d08adf3c0e883f07fee8ff7f5e5c9dbe7affc9bfd972dc48c20f4cfcfbb9d05\" returns successfully" Apr 17 00:08:14.118747 kubelet[2735]: I0417 00:08:14.118684 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7857d958f-t7c9t" podStartSLOduration=14.361764524 podStartE2EDuration="20.118668698s" podCreationTimestamp="2026-04-17 00:07:54 +0000 UTC" firstStartedPulling="2026-04-17 00:08:07.871032631 +0000 UTC m=+32.196201383" lastFinishedPulling="2026-04-17 00:08:13.627936805 +0000 UTC m=+37.953105557" observedRunningTime="2026-04-17 00:08:14.059290218 +0000 UTC m=+38.384458970" watchObservedRunningTime="2026-04-17 00:08:14.118668698 +0000 UTC m=+38.443837450" Apr 17 00:08:15.023849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount748189744.mount: Deactivated successfully. 
Apr 17 00:08:15.383860 containerd[1554]: time="2026-04-17T00:08:15.383800471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:15.384847 containerd[1554]: time="2026-04-17T00:08:15.384693438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 00:08:15.385392 containerd[1554]: time="2026-04-17T00:08:15.385360056Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:15.387193 containerd[1554]: time="2026-04-17T00:08:15.387167622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:15.387954 containerd[1554]: time="2026-04-17T00:08:15.387854501Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 1.759434647s" Apr 17 00:08:15.387954 containerd[1554]: time="2026-04-17T00:08:15.387882230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 00:08:15.390277 containerd[1554]: time="2026-04-17T00:08:15.390236624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 00:08:15.392760 containerd[1554]: time="2026-04-17T00:08:15.392709288Z" level=info msg="CreateContainer within sandbox 
\"e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 00:08:15.399854 containerd[1554]: time="2026-04-17T00:08:15.399177753Z" level=info msg="Container 3ee3f34102e1d22f89c8045c9422583037d16301eaa35d320c72e8f248c4a769: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:15.404556 containerd[1554]: time="2026-04-17T00:08:15.404534330Z" level=info msg="CreateContainer within sandbox \"e46b34b952e1b8d0270b82a0f080ab5df035a3b0b79985443cc387b4b5e7f1ca\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3ee3f34102e1d22f89c8045c9422583037d16301eaa35d320c72e8f248c4a769\"" Apr 17 00:08:15.405009 containerd[1554]: time="2026-04-17T00:08:15.404993758Z" level=info msg="StartContainer for \"3ee3f34102e1d22f89c8045c9422583037d16301eaa35d320c72e8f248c4a769\"" Apr 17 00:08:15.405958 containerd[1554]: time="2026-04-17T00:08:15.405937916Z" level=info msg="connecting to shim 3ee3f34102e1d22f89c8045c9422583037d16301eaa35d320c72e8f248c4a769" address="unix:///run/containerd/s/0cf1ed5801cadee240cb578990bc708b7b25a303ff4ab6691f6231601623020b" protocol=ttrpc version=3 Apr 17 00:08:15.438201 systemd[1]: Started cri-containerd-3ee3f34102e1d22f89c8045c9422583037d16301eaa35d320c72e8f248c4a769.scope - libcontainer container 3ee3f34102e1d22f89c8045c9422583037d16301eaa35d320c72e8f248c4a769. 
Apr 17 00:08:15.506012 containerd[1554]: time="2026-04-17T00:08:15.505982783Z" level=info msg="StartContainer for \"3ee3f34102e1d22f89c8045c9422583037d16301eaa35d320c72e8f248c4a769\" returns successfully" Apr 17 00:08:16.071059 kubelet[2735]: I0417 00:08:16.070651 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-j6qwc" podStartSLOduration=15.830272348 podStartE2EDuration="23.070621557s" podCreationTimestamp="2026-04-17 00:07:53 +0000 UTC" firstStartedPulling="2026-04-17 00:08:08.148865748 +0000 UTC m=+32.474034500" lastFinishedPulling="2026-04-17 00:08:15.389214957 +0000 UTC m=+39.714383709" observedRunningTime="2026-04-17 00:08:16.067233515 +0000 UTC m=+40.392402287" watchObservedRunningTime="2026-04-17 00:08:16.070621557 +0000 UTC m=+40.395790309" Apr 17 00:08:16.296788 containerd[1554]: time="2026-04-17T00:08:16.296740853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:16.297543 containerd[1554]: time="2026-04-17T00:08:16.297517341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 00:08:16.298447 containerd[1554]: time="2026-04-17T00:08:16.297983540Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:16.300245 containerd[1554]: time="2026-04-17T00:08:16.300218215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:16.300807 containerd[1554]: time="2026-04-17T00:08:16.300786184Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id 
\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 910.51686ms" Apr 17 00:08:16.300887 containerd[1554]: time="2026-04-17T00:08:16.300872114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 00:08:16.302166 containerd[1554]: time="2026-04-17T00:08:16.302148141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 00:08:16.305864 containerd[1554]: time="2026-04-17T00:08:16.305841813Z" level=info msg="CreateContainer within sandbox \"bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 00:08:16.311031 containerd[1554]: time="2026-04-17T00:08:16.310455453Z" level=info msg="Container 88752f25889398c561c178d6b039577b5e2803b53db1a168b6858b4037935c46: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:16.330824 containerd[1554]: time="2026-04-17T00:08:16.330749538Z" level=info msg="CreateContainer within sandbox \"bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"88752f25889398c561c178d6b039577b5e2803b53db1a168b6858b4037935c46\"" Apr 17 00:08:16.331553 containerd[1554]: time="2026-04-17T00:08:16.331530507Z" level=info msg="StartContainer for \"88752f25889398c561c178d6b039577b5e2803b53db1a168b6858b4037935c46\"" Apr 17 00:08:16.333559 containerd[1554]: time="2026-04-17T00:08:16.333236693Z" level=info msg="connecting to shim 88752f25889398c561c178d6b039577b5e2803b53db1a168b6858b4037935c46" address="unix:///run/containerd/s/e478e4a49825730a9f27fea5c46bcad00239fcc9a94d86abcdd34c2e891c08b5" protocol=ttrpc version=3 Apr 17 
00:08:16.360355 systemd[1]: Started cri-containerd-88752f25889398c561c178d6b039577b5e2803b53db1a168b6858b4037935c46.scope - libcontainer container 88752f25889398c561c178d6b039577b5e2803b53db1a168b6858b4037935c46. Apr 17 00:08:16.424905 containerd[1554]: time="2026-04-17T00:08:16.424801993Z" level=info msg="StartContainer for \"88752f25889398c561c178d6b039577b5e2803b53db1a168b6858b4037935c46\" returns successfully" Apr 17 00:08:17.173092 containerd[1554]: time="2026-04-17T00:08:17.173024777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:17.174210 containerd[1554]: time="2026-04-17T00:08:17.174065385Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 00:08:17.174613 containerd[1554]: time="2026-04-17T00:08:17.174573354Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:17.177506 containerd[1554]: time="2026-04-17T00:08:17.177484438Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:17.178726 containerd[1554]: time="2026-04-17T00:08:17.178686156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 876.327596ms" Apr 17 00:08:17.178784 containerd[1554]: time="2026-04-17T00:08:17.178727516Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 00:08:17.180439 containerd[1554]: time="2026-04-17T00:08:17.180249863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 00:08:17.185028 containerd[1554]: time="2026-04-17T00:08:17.185009283Z" level=info msg="CreateContainer within sandbox \"a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 00:08:17.194363 containerd[1554]: time="2026-04-17T00:08:17.194336695Z" level=info msg="Container ef892f246823deaf28bd0fc54cfbb1221ca9323c64df722e171c25ec1105e2b2: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:17.204909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3071255889.mount: Deactivated successfully. Apr 17 00:08:17.213926 containerd[1554]: time="2026-04-17T00:08:17.213892327Z" level=info msg="CreateContainer within sandbox \"a7d1be933d9a34a5d09b246c981d324eebda2d63f704272ba54e000e02acb7e4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ef892f246823deaf28bd0fc54cfbb1221ca9323c64df722e171c25ec1105e2b2\"" Apr 17 00:08:17.214699 containerd[1554]: time="2026-04-17T00:08:17.214673676Z" level=info msg="StartContainer for \"ef892f246823deaf28bd0fc54cfbb1221ca9323c64df722e171c25ec1105e2b2\"" Apr 17 00:08:17.216232 containerd[1554]: time="2026-04-17T00:08:17.216196692Z" level=info msg="connecting to shim ef892f246823deaf28bd0fc54cfbb1221ca9323c64df722e171c25ec1105e2b2" address="unix:///run/containerd/s/26daf171672e365530888cfa75c037da9241792fe04254f12c4c14d8e3cc8571" protocol=ttrpc version=3 Apr 17 00:08:17.242171 systemd[1]: Started cri-containerd-ef892f246823deaf28bd0fc54cfbb1221ca9323c64df722e171c25ec1105e2b2.scope - libcontainer container 
ef892f246823deaf28bd0fc54cfbb1221ca9323c64df722e171c25ec1105e2b2. Apr 17 00:08:17.302324 containerd[1554]: time="2026-04-17T00:08:17.302269134Z" level=info msg="StartContainer for \"ef892f246823deaf28bd0fc54cfbb1221ca9323c64df722e171c25ec1105e2b2\" returns successfully" Apr 17 00:08:17.859234 kubelet[2735]: I0417 00:08:17.859126 2735 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 00:08:17.860908 kubelet[2735]: I0417 00:08:17.860864 2735 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 00:08:18.203547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1912505077.mount: Deactivated successfully. Apr 17 00:08:18.215021 containerd[1554]: time="2026-04-17T00:08:18.214676336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:18.215610 containerd[1554]: time="2026-04-17T00:08:18.215572814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 00:08:18.215899 containerd[1554]: time="2026-04-17T00:08:18.215875023Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:18.217761 containerd[1554]: time="2026-04-17T00:08:18.217679810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 00:08:18.218724 containerd[1554]: time="2026-04-17T00:08:18.218277789Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id 
\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.037649167s" Apr 17 00:08:18.218724 containerd[1554]: time="2026-04-17T00:08:18.218307249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 00:08:18.222078 containerd[1554]: time="2026-04-17T00:08:18.222025663Z" level=info msg="CreateContainer within sandbox \"bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 00:08:18.227844 containerd[1554]: time="2026-04-17T00:08:18.227689073Z" level=info msg="Container 4426f5794d24b0c7d64e3b3eb0c54cc56676edb612095993646df42f8411350f: CDI devices from CRI Config.CDIDevices: []" Apr 17 00:08:18.242349 containerd[1554]: time="2026-04-17T00:08:18.242310647Z" level=info msg="CreateContainer within sandbox \"bc7a8a920737e4b91da50625932b76237ef7e4ed7b8078566150714e0e3da700\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4426f5794d24b0c7d64e3b3eb0c54cc56676edb612095993646df42f8411350f\"" Apr 17 00:08:18.243092 containerd[1554]: time="2026-04-17T00:08:18.242761287Z" level=info msg="StartContainer for \"4426f5794d24b0c7d64e3b3eb0c54cc56676edb612095993646df42f8411350f\"" Apr 17 00:08:18.244967 containerd[1554]: time="2026-04-17T00:08:18.244937413Z" level=info msg="connecting to shim 4426f5794d24b0c7d64e3b3eb0c54cc56676edb612095993646df42f8411350f" address="unix:///run/containerd/s/e478e4a49825730a9f27fea5c46bcad00239fcc9a94d86abcdd34c2e891c08b5" protocol=ttrpc version=3 Apr 17 00:08:18.266157 systemd[1]: Started cri-containerd-4426f5794d24b0c7d64e3b3eb0c54cc56676edb612095993646df42f8411350f.scope - 
libcontainer container 4426f5794d24b0c7d64e3b3eb0c54cc56676edb612095993646df42f8411350f. Apr 17 00:08:18.328060 containerd[1554]: time="2026-04-17T00:08:18.328002749Z" level=info msg="StartContainer for \"4426f5794d24b0c7d64e3b3eb0c54cc56676edb612095993646df42f8411350f\" returns successfully" Apr 17 00:08:19.072268 kubelet[2735]: I0417 00:08:19.072206 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-65fbbd585-wsm94" podStartSLOduration=1.738065674 podStartE2EDuration="11.072188038s" podCreationTimestamp="2026-04-17 00:08:08 +0000 UTC" firstStartedPulling="2026-04-17 00:08:08.885429873 +0000 UTC m=+33.210598625" lastFinishedPulling="2026-04-17 00:08:18.219552237 +0000 UTC m=+42.544720989" observedRunningTime="2026-04-17 00:08:19.07103379 +0000 UTC m=+43.396202542" watchObservedRunningTime="2026-04-17 00:08:19.072188038 +0000 UTC m=+43.397356810" Apr 17 00:08:19.072714 kubelet[2735]: I0417 00:08:19.072576 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-glxlh" podStartSLOduration=14.288092783 podStartE2EDuration="25.072569077s" podCreationTimestamp="2026-04-17 00:07:54 +0000 UTC" firstStartedPulling="2026-04-17 00:08:06.395262469 +0000 UTC m=+30.720431221" lastFinishedPulling="2026-04-17 00:08:17.179738763 +0000 UTC m=+41.504907515" observedRunningTime="2026-04-17 00:08:18.074824039 +0000 UTC m=+42.399992791" watchObservedRunningTime="2026-04-17 00:08:19.072569077 +0000 UTC m=+43.397737829" Apr 17 00:08:37.225985 kubelet[2735]: I0417 00:08:37.225669 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 00:08:51.774871 kubelet[2735]: E0417 00:08:51.774661 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:56.773381 kubelet[2735]: E0417 00:08:56.773315 2735 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:08:59.928588 kubelet[2735]: I0417 00:08:59.928414 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 00:09:01.773896 kubelet[2735]: E0417 00:09:01.773499 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:09:08.773457 kubelet[2735]: E0417 00:09:08.773426 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:09:13.776305 kubelet[2735]: E0417 00:09:13.774905 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:09:27.774020 kubelet[2735]: E0417 00:09:27.773760 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:09:31.775725 kubelet[2735]: E0417 00:09:31.775676 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:10:00.531486 update_engine[1539]: I20260417 00:10:00.531379 1539 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 17 00:10:00.531486 update_engine[1539]: I20260417 00:10:00.531443 1539 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 17 00:10:00.532188 update_engine[1539]: I20260417 00:10:00.531687 1539 prefs.cc:52] 
aleph-version not present in /var/lib/update_engine/prefs Apr 17 00:10:00.533104 update_engine[1539]: I20260417 00:10:00.532990 1539 omaha_request_params.cc:62] Current group set to stable Apr 17 00:10:00.534107 update_engine[1539]: I20260417 00:10:00.533779 1539 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 17 00:10:00.534107 update_engine[1539]: I20260417 00:10:00.533823 1539 update_attempter.cc:643] Scheduling an action processor start. Apr 17 00:10:00.534107 update_engine[1539]: I20260417 00:10:00.533850 1539 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 17 00:10:00.537974 update_engine[1539]: I20260417 00:10:00.537626 1539 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 17 00:10:00.537974 update_engine[1539]: I20260417 00:10:00.537712 1539 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 17 00:10:00.537974 update_engine[1539]: I20260417 00:10:00.537722 1539 omaha_request_action.cc:272] Request: Apr 17 00:10:00.537974 update_engine[1539]: Apr 17 00:10:00.537974 update_engine[1539]: Apr 17 00:10:00.537974 update_engine[1539]: Apr 17 00:10:00.537974 update_engine[1539]: Apr 17 00:10:00.537974 update_engine[1539]: Apr 17 00:10:00.537974 update_engine[1539]: Apr 17 00:10:00.537974 update_engine[1539]: Apr 17 00:10:00.537974 update_engine[1539]: Apr 17 00:10:00.537974 update_engine[1539]: I20260417 00:10:00.537730 1539 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 00:10:00.539425 locksmithd[1570]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 17 00:10:00.543519 update_engine[1539]: I20260417 00:10:00.543357 1539 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 00:10:00.544069 update_engine[1539]: I20260417 00:10:00.544014 1539 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 17 00:10:00.581333 update_engine[1539]: E20260417 00:10:00.581256 1539 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 00:10:00.581481 update_engine[1539]: I20260417 00:10:00.581383 1539 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 17 00:10:04.231238 systemd[1]: Started sshd@7-172.238.171.230:22-20.229.252.112:36794.service - OpenSSH per-connection server daemon (20.229.252.112:36794). Apr 17 00:10:04.771448 sshd[5611]: Accepted publickey for core from 20.229.252.112 port 36794 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:04.774471 sshd-session[5611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:04.781212 systemd-logind[1526]: New session 8 of user core. Apr 17 00:10:04.790186 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 00:10:05.168071 sshd[5614]: Connection closed by 20.229.252.112 port 36794 Apr 17 00:10:05.169506 sshd-session[5611]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:05.174740 systemd-logind[1526]: Session 8 logged out. Waiting for processes to exit. Apr 17 00:10:05.176013 systemd[1]: sshd@7-172.238.171.230:22-20.229.252.112:36794.service: Deactivated successfully. Apr 17 00:10:05.179453 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 00:10:05.181867 systemd-logind[1526]: Removed session 8. Apr 17 00:10:07.774455 kubelet[2735]: E0417 00:10:07.774391 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:10:10.274989 systemd[1]: Started sshd@8-172.238.171.230:22-20.229.252.112:39204.service - OpenSSH per-connection server daemon (20.229.252.112:39204). 
Apr 17 00:10:10.439398 update_engine[1539]: I20260417 00:10:10.439335 1539 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 00:10:10.439834 update_engine[1539]: I20260417 00:10:10.439424 1539 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 00:10:10.439834 update_engine[1539]: I20260417 00:10:10.439785 1539 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 17 00:10:10.441514 update_engine[1539]: E20260417 00:10:10.441481 1539 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 00:10:10.441556 update_engine[1539]: I20260417 00:10:10.441534 1539 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 17 00:10:10.772788 kubelet[2735]: E0417 00:10:10.772744 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:10:10.803570 sshd[5653]: Accepted publickey for core from 20.229.252.112 port 39204 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:10.805323 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:10.810877 systemd-logind[1526]: New session 9 of user core. Apr 17 00:10:10.816319 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 00:10:11.177888 sshd[5656]: Connection closed by 20.229.252.112 port 39204 Apr 17 00:10:11.180144 sshd-session[5653]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:11.184630 systemd[1]: sshd@8-172.238.171.230:22-20.229.252.112:39204.service: Deactivated successfully. Apr 17 00:10:11.187485 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 00:10:11.188536 systemd-logind[1526]: Session 9 logged out. Waiting for processes to exit. Apr 17 00:10:11.191026 systemd-logind[1526]: Removed session 9. 
Apr 17 00:10:14.773706 kubelet[2735]: E0417 00:10:14.773654 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:10:16.285268 systemd[1]: Started sshd@9-172.238.171.230:22-20.229.252.112:33990.service - OpenSSH per-connection server daemon (20.229.252.112:33990). Apr 17 00:10:16.810471 sshd[5714]: Accepted publickey for core from 20.229.252.112 port 33990 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:16.812091 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:16.820235 systemd-logind[1526]: New session 10 of user core. Apr 17 00:10:16.825204 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 00:10:17.183818 sshd[5717]: Connection closed by 20.229.252.112 port 33990 Apr 17 00:10:17.186431 sshd-session[5714]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:17.191497 systemd[1]: sshd@9-172.238.171.230:22-20.229.252.112:33990.service: Deactivated successfully. Apr 17 00:10:17.193676 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 00:10:17.195084 systemd-logind[1526]: Session 10 logged out. Waiting for processes to exit. Apr 17 00:10:17.196548 systemd-logind[1526]: Removed session 10. Apr 17 00:10:17.291241 systemd[1]: Started sshd@10-172.238.171.230:22-20.229.252.112:33998.service - OpenSSH per-connection server daemon (20.229.252.112:33998). Apr 17 00:10:17.815920 sshd[5750]: Accepted publickey for core from 20.229.252.112 port 33998 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:17.817430 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:17.822457 systemd-logind[1526]: New session 11 of user core. Apr 17 00:10:17.827221 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 17 00:10:18.220745 sshd[5753]: Connection closed by 20.229.252.112 port 33998 Apr 17 00:10:18.223006 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:18.229206 systemd[1]: sshd@10-172.238.171.230:22-20.229.252.112:33998.service: Deactivated successfully. Apr 17 00:10:18.232357 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 00:10:18.233951 systemd-logind[1526]: Session 11 logged out. Waiting for processes to exit. Apr 17 00:10:18.236102 systemd-logind[1526]: Removed session 11. Apr 17 00:10:18.334315 systemd[1]: Started sshd@11-172.238.171.230:22-20.229.252.112:34012.service - OpenSSH per-connection server daemon (20.229.252.112:34012). Apr 17 00:10:18.858071 sshd[5763]: Accepted publickey for core from 20.229.252.112 port 34012 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:18.859639 sshd-session[5763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:18.864155 systemd-logind[1526]: New session 12 of user core. Apr 17 00:10:18.867176 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 00:10:19.239409 sshd[5766]: Connection closed by 20.229.252.112 port 34012 Apr 17 00:10:19.241272 sshd-session[5763]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:19.245845 systemd[1]: sshd@11-172.238.171.230:22-20.229.252.112:34012.service: Deactivated successfully. Apr 17 00:10:19.249013 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 00:10:19.251180 systemd-logind[1526]: Session 12 logged out. Waiting for processes to exit. Apr 17 00:10:19.254032 systemd-logind[1526]: Removed session 12. 
Apr 17 00:10:20.436176 update_engine[1539]: I20260417 00:10:20.436105 1539 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 00:10:20.436654 update_engine[1539]: I20260417 00:10:20.436198 1539 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 00:10:20.436718 update_engine[1539]: I20260417 00:10:20.436689 1539 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 17 00:10:20.437466 update_engine[1539]: E20260417 00:10:20.437428 1539 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 00:10:20.437513 update_engine[1539]: I20260417 00:10:20.437486 1539 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 17 00:10:24.348307 systemd[1]: Started sshd@12-172.238.171.230:22-20.229.252.112:34014.service - OpenSSH per-connection server daemon (20.229.252.112:34014). Apr 17 00:10:24.874969 sshd[5778]: Accepted publickey for core from 20.229.252.112 port 34014 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:24.876551 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:24.887959 systemd-logind[1526]: New session 13 of user core. Apr 17 00:10:24.893189 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 00:10:25.263901 sshd[5781]: Connection closed by 20.229.252.112 port 34014 Apr 17 00:10:25.264717 sshd-session[5778]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:25.271031 systemd[1]: sshd@12-172.238.171.230:22-20.229.252.112:34014.service: Deactivated successfully. Apr 17 00:10:25.271406 systemd-logind[1526]: Session 13 logged out. Waiting for processes to exit. Apr 17 00:10:25.273817 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 00:10:25.275976 systemd-logind[1526]: Removed session 13. 
Apr 17 00:10:25.370395 systemd[1]: Started sshd@13-172.238.171.230:22-20.229.252.112:53288.service - OpenSSH per-connection server daemon (20.229.252.112:53288). Apr 17 00:10:25.894098 sshd[5793]: Accepted publickey for core from 20.229.252.112 port 53288 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:25.895786 sshd-session[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:25.901960 systemd-logind[1526]: New session 14 of user core. Apr 17 00:10:25.907200 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 00:10:26.428557 sshd[5818]: Connection closed by 20.229.252.112 port 53288 Apr 17 00:10:26.430734 sshd-session[5793]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:26.435365 systemd-logind[1526]: Session 14 logged out. Waiting for processes to exit. Apr 17 00:10:26.436392 systemd[1]: sshd@13-172.238.171.230:22-20.229.252.112:53288.service: Deactivated successfully. Apr 17 00:10:26.439541 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 00:10:26.441155 systemd-logind[1526]: Removed session 14. Apr 17 00:10:26.538256 systemd[1]: Started sshd@14-172.238.171.230:22-20.229.252.112:53294.service - OpenSSH per-connection server daemon (20.229.252.112:53294). Apr 17 00:10:27.069821 sshd[5827]: Accepted publickey for core from 20.229.252.112 port 53294 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:27.071665 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:27.077243 systemd-logind[1526]: New session 15 of user core. Apr 17 00:10:27.085259 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 00:10:27.978018 sshd[5830]: Connection closed by 20.229.252.112 port 53294 Apr 17 00:10:27.981262 sshd-session[5827]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:27.985756 systemd-logind[1526]: Session 15 logged out. 
Waiting for processes to exit. Apr 17 00:10:27.986416 systemd[1]: sshd@14-172.238.171.230:22-20.229.252.112:53294.service: Deactivated successfully. Apr 17 00:10:27.992334 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 00:10:27.996566 systemd-logind[1526]: Removed session 15. Apr 17 00:10:28.084665 systemd[1]: Started sshd@15-172.238.171.230:22-20.229.252.112:53302.service - OpenSSH per-connection server daemon (20.229.252.112:53302). Apr 17 00:10:28.603491 sshd[5855]: Accepted publickey for core from 20.229.252.112 port 53302 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:28.605012 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:28.610669 systemd-logind[1526]: New session 16 of user core. Apr 17 00:10:28.616208 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 00:10:28.773647 kubelet[2735]: E0417 00:10:28.773609 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:10:29.112575 sshd[5858]: Connection closed by 20.229.252.112 port 53302 Apr 17 00:10:29.114669 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:29.120324 systemd[1]: sshd@15-172.238.171.230:22-20.229.252.112:53302.service: Deactivated successfully. Apr 17 00:10:29.123989 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 00:10:29.125143 systemd-logind[1526]: Session 16 logged out. Waiting for processes to exit. Apr 17 00:10:29.127536 systemd-logind[1526]: Removed session 16. Apr 17 00:10:29.223538 systemd[1]: Started sshd@16-172.238.171.230:22-20.229.252.112:53312.service - OpenSSH per-connection server daemon (20.229.252.112:53312). 
Apr 17 00:10:29.762079 sshd[5891]: Accepted publickey for core from 20.229.252.112 port 53312 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:29.763783 sshd-session[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:29.769122 systemd-logind[1526]: New session 17 of user core. Apr 17 00:10:29.774335 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 00:10:30.125573 sshd[5894]: Connection closed by 20.229.252.112 port 53312 Apr 17 00:10:30.126363 sshd-session[5891]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:30.130564 systemd-logind[1526]: Session 17 logged out. Waiting for processes to exit. Apr 17 00:10:30.131394 systemd[1]: sshd@16-172.238.171.230:22-20.229.252.112:53312.service: Deactivated successfully. Apr 17 00:10:30.133836 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 00:10:30.135648 systemd-logind[1526]: Removed session 17. Apr 17 00:10:30.436234 update_engine[1539]: I20260417 00:10:30.436079 1539 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 00:10:30.436234 update_engine[1539]: I20260417 00:10:30.436178 1539 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 00:10:30.436825 update_engine[1539]: I20260417 00:10:30.436753 1539 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 17 00:10:30.437804 update_engine[1539]: E20260417 00:10:30.437561 1539 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 00:10:30.437804 update_engine[1539]: I20260417 00:10:30.437794 1539 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 17 00:10:30.437901 update_engine[1539]: I20260417 00:10:30.437808 1539 omaha_request_action.cc:617] Omaha request response: Apr 17 00:10:30.437901 update_engine[1539]: E20260417 00:10:30.437880 1539 omaha_request_action.cc:636] Omaha request network transfer failed. 
Apr 17 00:10:30.437980 update_engine[1539]: I20260417 00:10:30.437901 1539 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 17 00:10:30.437980 update_engine[1539]: I20260417 00:10:30.437908 1539 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 17 00:10:30.437980 update_engine[1539]: I20260417 00:10:30.437914 1539 update_attempter.cc:306] Processing Done. Apr 17 00:10:30.437980 update_engine[1539]: E20260417 00:10:30.437928 1539 update_attempter.cc:619] Update failed. Apr 17 00:10:30.437980 update_engine[1539]: I20260417 00:10:30.437935 1539 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 17 00:10:30.437980 update_engine[1539]: I20260417 00:10:30.437942 1539 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 17 00:10:30.437980 update_engine[1539]: I20260417 00:10:30.437948 1539 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 17 00:10:30.438644 update_engine[1539]: I20260417 00:10:30.438014 1539 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 17 00:10:30.438644 update_engine[1539]: I20260417 00:10:30.438033 1539 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 17 00:10:30.438644 update_engine[1539]: I20260417 00:10:30.438061 1539 omaha_request_action.cc:272] Request: Apr 17 00:10:30.438644 update_engine[1539]: Apr 17 00:10:30.438644 update_engine[1539]: Apr 17 00:10:30.438644 update_engine[1539]: Apr 17 00:10:30.438644 update_engine[1539]: Apr 17 00:10:30.438644 update_engine[1539]: Apr 17 00:10:30.438644 update_engine[1539]: Apr 17 00:10:30.438644 update_engine[1539]: I20260417 00:10:30.438069 1539 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 17 00:10:30.438644 update_engine[1539]: I20260417 00:10:30.438093 1539 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 17 00:10:30.438644 update_engine[1539]: I20260417 00:10:30.438637 1539 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 17 00:10:30.439004 locksmithd[1570]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 17 00:10:30.439512 update_engine[1539]: E20260417 00:10:30.439150 1539 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 17 00:10:30.439512 update_engine[1539]: I20260417 00:10:30.439188 1539 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 17 00:10:30.439512 update_engine[1539]: I20260417 00:10:30.439198 1539 omaha_request_action.cc:617] Omaha request response: Apr 17 00:10:30.439512 update_engine[1539]: I20260417 00:10:30.439203 1539 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 17 00:10:30.439512 update_engine[1539]: I20260417 00:10:30.439210 1539 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 17 00:10:30.439512 update_engine[1539]: I20260417 00:10:30.439216 1539 update_attempter.cc:306] Processing Done. Apr 17 00:10:30.439512 update_engine[1539]: I20260417 00:10:30.439224 1539 update_attempter.cc:310] Error event sent. Apr 17 00:10:30.439512 update_engine[1539]: I20260417 00:10:30.439231 1539 update_check_scheduler.cc:74] Next update check in 46m38s Apr 17 00:10:30.440034 locksmithd[1570]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 17 00:10:35.234372 systemd[1]: Started sshd@17-172.238.171.230:22-20.229.252.112:54240.service - OpenSSH per-connection server daemon (20.229.252.112:54240). Apr 17 00:10:35.753849 sshd[5907]: Accepted publickey for core from 20.229.252.112 port 54240 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:35.755839 sshd-session[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:35.761336 systemd-logind[1526]: New session 18 of user core. 
Apr 17 00:10:35.766245 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 00:10:35.773607 kubelet[2735]: E0417 00:10:35.773584 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:10:36.115882 sshd[5910]: Connection closed by 20.229.252.112 port 54240 Apr 17 00:10:36.117238 sshd-session[5907]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:36.121514 systemd[1]: sshd@17-172.238.171.230:22-20.229.252.112:54240.service: Deactivated successfully. Apr 17 00:10:36.124754 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 00:10:36.127064 systemd-logind[1526]: Session 18 logged out. Waiting for processes to exit. Apr 17 00:10:36.133630 systemd-logind[1526]: Removed session 18. Apr 17 00:10:39.773938 kubelet[2735]: E0417 00:10:39.773488 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:10:40.773307 kubelet[2735]: E0417 00:10:40.773274 2735 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 17 00:10:41.222904 systemd[1]: Started sshd@18-172.238.171.230:22-20.229.252.112:54244.service - OpenSSH per-connection server daemon (20.229.252.112:54244). Apr 17 00:10:41.748690 sshd[5950]: Accepted publickey for core from 20.229.252.112 port 54244 ssh2: RSA SHA256:lCTIuX3gOVTmTwjQkn3/WzgoOyQCvkNFEXg4QaE1G6A Apr 17 00:10:41.750875 sshd-session[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 00:10:41.757779 systemd-logind[1526]: New session 19 of user core. Apr 17 00:10:41.768185 systemd[1]: Started session-19.scope - Session 19 of User core. 
Apr 17 00:10:42.116714 sshd[5953]: Connection closed by 20.229.252.112 port 54244 Apr 17 00:10:42.118274 sshd-session[5950]: pam_unix(sshd:session): session closed for user core Apr 17 00:10:42.122782 systemd[1]: sshd@18-172.238.171.230:22-20.229.252.112:54244.service: Deactivated successfully. Apr 17 00:10:42.125088 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 00:10:42.125880 systemd-logind[1526]: Session 19 logged out. Waiting for processes to exit. Apr 17 00:10:42.127440 systemd-logind[1526]: Removed session 19.