Apr 24 00:34:22.954164 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 23 22:08:58 -00 2026
Apr 24 00:34:22.954215 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:34:22.954225 kernel: BIOS-provided physical RAM map:
Apr 24 00:34:22.954231 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 24 00:34:22.954263 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 24 00:34:22.954269 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 24 00:34:22.954280 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 24 00:34:22.954286 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 24 00:34:22.954292 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 24 00:34:22.954298 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 24 00:34:22.954304 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 00:34:22.954310 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 24 00:34:22.954316 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 24 00:34:22.954345 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 24 00:34:22.954356 kernel: NX (Execute Disable) protection: active
Apr 24 00:34:22.954367 kernel: APIC: Static calls initialized
Apr 24 00:34:22.954374 kernel: SMBIOS 2.8 present.
Apr 24 00:34:22.954380 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 24 00:34:22.954410 kernel: DMI: Memory slots populated: 1/1
Apr 24 00:34:22.954426 kernel: Hypervisor detected: KVM
Apr 24 00:34:22.956494 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 24 00:34:22.956502 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 00:34:22.956508 kernel: kvm-clock: using sched offset of 7748964565 cycles
Apr 24 00:34:22.956515 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 00:34:22.956523 kernel: tsc: Detected 1999.998 MHz processor
Apr 24 00:34:22.956530 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 00:34:22.956537 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 00:34:22.956543 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 24 00:34:22.956550 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 24 00:34:22.956644 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 00:34:22.956676 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 24 00:34:22.956682 kernel: Using GB pages for direct mapping
Apr 24 00:34:22.956689 kernel: ACPI: Early table checksum verification disabled
Apr 24 00:34:22.956696 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 24 00:34:22.956702 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:34:22.956709 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:34:22.956716 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:34:22.956722 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 24 00:34:22.956729 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:34:22.956738 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:34:22.956748 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:34:22.956755 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:34:22.956762 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 24 00:34:22.956794 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 24 00:34:22.956805 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 24 00:34:22.956812 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 24 00:34:22.956819 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 24 00:34:22.956848 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 24 00:34:22.956856 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 24 00:34:22.956863 kernel: No NUMA configuration found
Apr 24 00:34:22.956889 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 24 00:34:22.956896 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Apr 24 00:34:22.956903 kernel: Zone ranges:
Apr 24 00:34:22.956913 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 00:34:22.956920 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 24 00:34:22.956927 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 24 00:34:22.956934 kernel: Device empty
Apr 24 00:34:22.956941 kernel: Movable zone start for each node
Apr 24 00:34:22.956948 kernel: Early memory node ranges
Apr 24 00:34:22.956955 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 24 00:34:22.956962 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 24 00:34:22.956969 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 24 00:34:22.956975 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 24 00:34:22.956984 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 00:34:22.956991 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 24 00:34:22.956998 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 24 00:34:22.957005 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 00:34:22.957079 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 00:34:22.957088 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 00:34:22.957095 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 00:34:22.957102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 00:34:22.957109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 00:34:22.957140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 00:34:22.957149 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 00:34:22.957156 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 00:34:22.957163 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 00:34:22.957170 kernel: TSC deadline timer available
Apr 24 00:34:22.957177 kernel: CPU topo: Max. logical packages: 1
Apr 24 00:34:22.957205 kernel: CPU topo: Max. logical dies: 1
Apr 24 00:34:22.957212 kernel: CPU topo: Max. dies per package: 1
Apr 24 00:34:22.957219 kernel: CPU topo: Max. threads per core: 1
Apr 24 00:34:22.957229 kernel: CPU topo: Num. cores per package: 2
Apr 24 00:34:22.957236 kernel: CPU topo: Num. threads per package: 2
Apr 24 00:34:22.957243 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Apr 24 00:34:22.957250 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 00:34:22.957257 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 00:34:22.957264 kernel: kvm-guest: setup PV sched yield
Apr 24 00:34:22.957270 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 24 00:34:22.957277 kernel: Booting paravirtualized kernel on KVM
Apr 24 00:34:22.957284 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 00:34:22.957293 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 24 00:34:22.957325 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u1048576
Apr 24 00:34:22.957332 kernel: pcpu-alloc: s207448 r8192 d30120 u1048576 alloc=1*2097152
Apr 24 00:34:22.957339 kernel: pcpu-alloc: [0] 0 1
Apr 24 00:34:22.957346 kernel: kvm-guest: PV spinlocks enabled
Apr 24 00:34:22.957353 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 00:34:22.957384 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:34:22.957391 kernel: random: crng init done
Apr 24 00:34:22.957422 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 00:34:22.957431 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 00:34:22.957453 kernel: Fallback order for Node 0: 0
Apr 24 00:34:22.957488 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Apr 24 00:34:22.957496 kernel: Policy zone: Normal
Apr 24 00:34:22.957503 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 00:34:22.957510 kernel: software IO TLB: area num 2.
Apr 24 00:34:22.957516 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 00:34:22.957523 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 24 00:34:22.957534 kernel: ftrace: allocated 157 pages with 5 groups
Apr 24 00:34:22.957541 kernel: Dynamic Preempt: voluntary
Apr 24 00:34:22.957548 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 00:34:22.957587 kernel: rcu: RCU event tracing is enabled.
Apr 24 00:34:22.957595 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 00:34:22.957602 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 00:34:22.957631 kernel: Rude variant of Tasks RCU enabled.
Apr 24 00:34:22.957638 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 00:34:22.957645 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 00:34:22.957673 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 00:34:22.957684 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 00:34:22.957698 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 00:34:22.957707 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 00:34:22.957738 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 24 00:34:22.957745 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 00:34:22.957753 kernel: Console: colour VGA+ 80x25
Apr 24 00:34:22.957760 kernel: printk: legacy console [tty0] enabled
Apr 24 00:34:22.957767 kernel: printk: legacy console [ttyS0] enabled
Apr 24 00:34:22.957775 kernel: ACPI: Core revision 20240827
Apr 24 00:34:22.957786 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 00:34:22.957794 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 00:34:22.957801 kernel: x2apic enabled
Apr 24 00:34:22.957831 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 00:34:22.957838 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 00:34:22.957846 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 00:34:22.957853 kernel: kvm-guest: setup PV IPIs
Apr 24 00:34:22.957864 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 00:34:22.957871 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Apr 24 00:34:22.957878 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Apr 24 00:34:22.957929 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 00:34:22.957937 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 24 00:34:22.957944 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 24 00:34:22.957952 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 00:34:22.957959 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 00:34:22.957966 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 00:34:22.957976 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 24 00:34:22.958007 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 24 00:34:22.958015 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 24 00:34:22.958022 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 24 00:34:22.959306 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 24 00:34:22.959321 kernel: active return thunk: srso_alias_return_thunk
Apr 24 00:34:22.959328 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 24 00:34:22.959335 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 24 00:34:22.959347 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 00:34:22.959354 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 00:34:22.959361 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 00:34:22.959368 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 00:34:22.959375 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 24 00:34:22.959462 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 00:34:22.959475 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 24 00:34:22.959486 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 24 00:34:22.959498 kernel: Freeing SMP alternatives memory: 32K
Apr 24 00:34:22.959514 kernel: pid_max: default: 32768 minimum: 301
Apr 24 00:34:22.959526 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 24 00:34:22.959570 kernel: landlock: Up and running.
Apr 24 00:34:22.959582 kernel: SELinux: Initializing.
Apr 24 00:34:22.959594 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 00:34:22.959606 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 00:34:22.959618 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 24 00:34:22.959682 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 24 00:34:22.959695 kernel: ... version: 0
Apr 24 00:34:22.959711 kernel: ... bit width: 48
Apr 24 00:34:22.959723 kernel: ... generic registers: 6
Apr 24 00:34:22.959760 kernel: ... value mask: 0000ffffffffffff
Apr 24 00:34:22.959772 kernel: ... max period: 00007fffffffffff
Apr 24 00:34:22.959784 kernel: ... fixed-purpose events: 0
Apr 24 00:34:22.959820 kernel: ... event mask: 000000000000003f
Apr 24 00:34:22.959832 kernel: signal: max sigframe size: 3376
Apr 24 00:34:22.959844 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 00:34:22.959856 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 00:34:22.959904 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 24 00:34:22.959939 kernel: smp: Bringing up secondary CPUs ...
Apr 24 00:34:22.959951 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 00:34:22.959963 kernel: .... node #0, CPUs: #1
Apr 24 00:34:22.959997 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 00:34:22.960010 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Apr 24 00:34:22.960050 kernel: Memory: 3953608K/4193772K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 235480K reserved, 0K cma-reserved)
Apr 24 00:34:22.960087 kernel: devtmpfs: initialized
Apr 24 00:34:22.960123 kernel: x86/mm: Memory block size: 128MB
Apr 24 00:34:22.960141 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 00:34:22.960153 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 00:34:22.960190 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 00:34:22.960202 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 00:34:22.960214 kernel: audit: initializing netlink subsys (disabled)
Apr 24 00:34:22.960251 kernel: audit: type=2000 audit(1776990860.062:1): state=initialized audit_enabled=0 res=1
Apr 24 00:34:22.960263 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 00:34:22.960275 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 00:34:22.960287 kernel: cpuidle: using governor menu
Apr 24 00:34:22.960302 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 00:34:22.960340 kernel: dca service started, version 1.12.1
Apr 24 00:34:22.960352 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 24 00:34:22.960389 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 24 00:34:22.960402 kernel: PCI: Using configuration type 1 for base access
Apr 24 00:34:22.960414 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 00:34:22.960462 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 00:34:22.960476 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 00:34:22.960515 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 00:34:22.960532 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 00:34:22.960544 kernel: ACPI: Added _OSI(Module Device)
Apr 24 00:34:22.960582 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 00:34:22.960619 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 00:34:22.960634 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 00:34:22.960643 kernel: ACPI: Interpreter enabled
Apr 24 00:34:22.960650 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 00:34:22.960657 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 00:34:22.960665 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 00:34:22.960700 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 00:34:22.960709 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 00:34:22.960721 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 00:34:22.964540 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 00:34:22.964820 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 00:34:22.965263 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 00:34:22.965412 kernel: PCI host bridge to bus 0000:00
Apr 24 00:34:22.965685 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 00:34:22.965951 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 00:34:22.966162 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 00:34:22.968523 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 24 00:34:22.968744 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 24 00:34:22.969040 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 24 00:34:22.969323 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 00:34:22.971625 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 24 00:34:22.971887 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 24 00:34:22.972120 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 24 00:34:22.972430 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 24 00:34:22.972710 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 24 00:34:22.972915 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 00:34:22.973194 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Apr 24 00:34:22.973408 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Apr 24 00:34:22.975672 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 24 00:34:22.975804 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 24 00:34:22.975969 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 24 00:34:22.976348 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Apr 24 00:34:22.976708 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 24 00:34:22.976845 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 24 00:34:22.977030 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 24 00:34:22.977164 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 24 00:34:22.977303 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 00:34:22.979506 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 24 00:34:22.979960 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Apr 24 00:34:22.980107 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Apr 24 00:34:22.980629 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 24 00:34:22.981003 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 24 00:34:22.981018 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 00:34:22.981027 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 00:34:22.981037 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 00:34:22.981045 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 00:34:22.981052 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 00:34:22.981064 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 00:34:22.981072 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 00:34:22.981079 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 00:34:22.981086 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 00:34:22.981096 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 00:34:22.981104 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 00:34:22.981111 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 00:34:22.981118 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 00:34:22.981126 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 00:34:22.981135 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 00:34:22.981143 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 00:34:22.981150 kernel: iommu: Default domain type: Translated
Apr 24 00:34:22.981157 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 00:34:22.981165 kernel: PCI: Using ACPI for IRQ routing
Apr 24 00:34:22.981175 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 00:34:22.981186 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 24 00:34:22.981196 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 24 00:34:22.982716 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 00:34:22.983158 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 00:34:22.983317 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 00:34:22.983389 kernel: vgaarb: loaded
Apr 24 00:34:22.983400 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 00:34:22.984503 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 00:34:22.984516 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 00:34:22.984524 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 00:34:22.984532 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 00:34:22.984540 kernel: pnp: PnP ACPI init
Apr 24 00:34:22.984797 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 24 00:34:22.984867 kernel: pnp: PnP ACPI: found 5 devices
Apr 24 00:34:22.984875 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 00:34:22.984883 kernel: NET: Registered PF_INET protocol family
Apr 24 00:34:22.984892 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 00:34:22.984927 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 00:34:22.984936 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 00:34:22.984970 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 00:34:22.984984 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 00:34:22.984991 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 00:34:22.984999 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 00:34:22.985006 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 00:34:22.985013 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 00:34:22.985021 kernel: NET: Registered PF_XDP protocol family
Apr 24 00:34:22.985188 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 00:34:22.985345 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 00:34:22.985629 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 00:34:22.985858 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 24 00:34:22.986085 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 24 00:34:22.986270 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 24 00:34:22.986283 kernel: PCI: CLS 0 bytes, default 64
Apr 24 00:34:22.986291 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 24 00:34:22.986327 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 24 00:34:22.986335 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Apr 24 00:34:22.986366 kernel: Initialise system trusted keyrings
Apr 24 00:34:22.986378 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 00:34:22.986386 kernel: Key type asymmetric registered
Apr 24 00:34:22.986394 kernel: Asymmetric key parser 'x509' registered
Apr 24 00:34:22.986402 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 24 00:34:22.986410 kernel: io scheduler mq-deadline registered
Apr 24 00:34:22.986417 kernel: io scheduler kyber registered
Apr 24 00:34:22.986424 kernel: io scheduler bfq registered
Apr 24 00:34:22.986431 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 00:34:22.988562 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 00:34:22.988578 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 00:34:22.988587 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 00:34:22.988595 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 00:34:22.988602 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 00:34:22.988634 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 00:34:22.988647 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 00:34:22.988660 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 00:34:22.988942 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 24 00:34:22.989211 kernel: rtc_cmos 00:03: registered as rtc0
Apr 24 00:34:22.989507 kernel: rtc_cmos 00:03: setting system clock to 2026-04-24T00:34:22 UTC (1776990862)
Apr 24 00:34:22.989768 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 24 00:34:22.989783 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 24 00:34:22.989824 kernel: NET: Registered PF_INET6 protocol family
Apr 24 00:34:22.989832 kernel: Segment Routing with IPv6
Apr 24 00:34:22.989839 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 00:34:22.989868 kernel: NET: Registered PF_PACKET protocol family
Apr 24 00:34:22.989877 kernel: Key type dns_resolver registered
Apr 24 00:34:22.989889 kernel: IPI shorthand broadcast: enabled
Apr 24 00:34:22.989897 kernel: sched_clock: Marking stable (2968005723, 362959515)->(3436947236, -105981998)
Apr 24 00:34:22.989926 kernel: registered taskstats version 1
Apr 24 00:34:22.989934 kernel: Loading compiled-in X.509 certificates
Apr 24 00:34:22.989942 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 09f9b319c99eb3f54e68ef799fdb2bce5b238ec0'
Apr 24 00:34:22.989949 kernel: Demotion targets for Node 0: null
Apr 24 00:34:22.989956 kernel: Key type .fscrypt registered
Apr 24 00:34:22.989964 kernel: Key type fscrypt-provisioning registered
Apr 24 00:34:22.989993 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 00:34:22.990005 kernel: ima: Allocated hash algorithm: sha1
Apr 24 00:34:22.990012 kernel: ima: No architecture policies found
Apr 24 00:34:22.990020 kernel: clk: Disabling unused clocks
Apr 24 00:34:22.990027 kernel: Warning: unable to open an initial console.
Apr 24 00:34:22.990035 kernel: Freeing unused kernel image (initmem) memory: 46224K
Apr 24 00:34:22.990042 kernel: Write protecting the kernel read-only data: 40960k
Apr 24 00:34:22.990050 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 24 00:34:22.990080 kernel: Run /init as init process
Apr 24 00:34:22.990087 kernel: with arguments:
Apr 24 00:34:22.990098 kernel: /init
Apr 24 00:34:22.990105 kernel: with environment:
Apr 24 00:34:22.990130 kernel: HOME=/
Apr 24 00:34:22.990163 kernel: TERM=linux
Apr 24 00:34:22.990172 systemd[1]: Successfully made /usr/ read-only.
Apr 24 00:34:22.990183 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 24 00:34:22.990192 systemd[1]: Detected virtualization kvm.
Apr 24 00:34:22.990226 systemd[1]: Detected architecture x86-64.
Apr 24 00:34:22.990235 systemd[1]: Running in initrd.
Apr 24 00:34:22.990243 systemd[1]: No hostname configured, using default hostname.
Apr 24 00:34:22.990251 systemd[1]: Hostname set to .
Apr 24 00:34:22.990279 systemd[1]: Initializing machine ID from random generator.
Apr 24 00:34:22.990287 systemd[1]: Queued start job for default target initrd.target.
Apr 24 00:34:22.990296 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:34:22.990304 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:34:22.990317 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 00:34:22.990347 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 00:34:22.990356 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 00:34:22.990365 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 00:34:22.990374 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 00:34:22.990383 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 00:34:22.990391 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:34:22.990403 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:34:22.990411 systemd[1]: Reached target paths.target - Path Units.
Apr 24 00:34:22.990419 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 00:34:22.990427 systemd[1]: Reached target swap.target - Swaps.
Apr 24 00:34:22.992517 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 00:34:22.992531 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 00:34:22.992539 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 00:34:22.992579 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 00:34:22.992592 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 24 00:34:22.992604 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:34:22.992613 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:34:22.992651 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:34:22.992660 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 00:34:22.992690 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 00:34:22.992702 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 00:34:22.992710 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 00:34:22.992718 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 24 00:34:22.992727 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 00:34:22.992764 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 00:34:22.992772 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 00:34:22.992781 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 00:34:22.992789 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 00:34:22.992834 systemd-journald[186]: Collecting audit messages is disabled.
Apr 24 00:34:22.992863 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:34:22.992871 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 00:34:22.992880 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 00:34:22.992920 systemd-journald[186]: Journal started
Apr 24 00:34:22.992961 systemd-journald[186]: Runtime Journal (/run/log/journal/74f4e14c41c248a9a127597a5231c33a) is 8M, max 78.2M, 70.2M free.
Apr 24 00:34:22.965209 systemd-modules-load[187]: Inserted module 'overlay'
Apr 24 00:34:23.162430 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 00:34:23.162504 kernel: Bridge firewalling registered
Apr 24 00:34:23.013052 systemd-modules-load[187]: Inserted module 'br_netfilter'
Apr 24 00:34:23.199343 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 00:34:23.201746 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:34:23.204705 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:34:23.207296 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 00:34:23.213678 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 00:34:23.217646 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 00:34:23.228564 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 00:34:23.234575 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 00:34:23.242685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:34:23.251449 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 00:34:23.256579 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 00:34:23.261604 systemd-tmpfiles[209]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 24 00:34:23.269570 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 00:34:23.270838 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 00:34:23.275993 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 00:34:23.291464 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:34:23.335369 systemd-resolved[226]: Positive Trust Anchors:
Apr 24 00:34:23.335390 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 00:34:23.337562 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 00:34:23.340845 systemd-resolved[226]: Defaulting to hostname 'linux'.
Apr 24 00:34:23.346052 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 00:34:23.347671 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 00:34:23.395486 kernel: SCSI subsystem initialized
Apr 24 00:34:23.407562 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 00:34:23.420475 kernel: iscsi: registered transport (tcp)
Apr 24 00:34:23.443668 kernel: iscsi: registered transport (qla4xxx)
Apr 24 00:34:23.443711 kernel: QLogic iSCSI HBA Driver
Apr 24 00:34:23.470196 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 00:34:23.485682 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:34:23.488660 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 00:34:23.543730 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 00:34:23.546386 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 00:34:23.610478 kernel: raid6: avx2x4 gen() 23689 MB/s
Apr 24 00:34:23.629474 kernel: raid6: avx2x2 gen() 22100 MB/s
Apr 24 00:34:23.648578 kernel: raid6: avx2x1 gen() 7331 MB/s
Apr 24 00:34:23.648668 kernel: raid6: using algorithm avx2x4 gen() 23689 MB/s
Apr 24 00:34:23.670601 kernel: raid6: .... xor() 4058 MB/s, rmw enabled
Apr 24 00:34:23.670679 kernel: raid6: using avx2x2 recovery algorithm
Apr 24 00:34:23.693481 kernel: xor: automatically using best checksumming function avx
Apr 24 00:34:23.842485 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 00:34:23.850248 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 00:34:23.853020 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 00:34:23.881592 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Apr 24 00:34:23.888821 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 00:34:23.892993 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 00:34:23.923853 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation
Apr 24 00:34:23.957474 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 00:34:23.960974 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 00:34:24.038721 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:34:24.045625 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 00:34:24.136398 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 24 00:34:24.136479 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Apr 24 00:34:24.136687 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 00:34:24.143454 kernel: scsi host0: Virtio SCSI HBA
Apr 24 00:34:24.151462 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 24 00:34:24.155788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 00:34:24.155907 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:34:24.160789 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 00:34:24.165704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 00:34:24.171486 kernel: libata version 3.00 loaded.
Apr 24 00:34:24.172093 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:34:24.464137 kernel: AES CTR mode by8 optimization enabled
Apr 24 00:34:24.466521 kernel: ahci 0000:00:1f.2: version 3.0
Apr 24 00:34:24.466816 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 24 00:34:24.468572 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 24 00:34:24.468748 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 24 00:34:24.468894 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 24 00:34:24.475656 kernel: scsi host1: ahci
Apr 24 00:34:24.475986 kernel: scsi host2: ahci
Apr 24 00:34:24.476218 kernel: scsi host3: ahci
Apr 24 00:34:24.477184 kernel: scsi host4: ahci
Apr 24 00:34:24.478270 kernel: scsi host5: ahci
Apr 24 00:34:24.478504 kernel: scsi host6: ahci
Apr 24 00:34:24.478686 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 29 lpm-pol 1
Apr 24 00:34:24.478700 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 29 lpm-pol 1
Apr 24 00:34:24.478712 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 29 lpm-pol 1
Apr 24 00:34:24.478728 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 29 lpm-pol 1
Apr 24 00:34:24.478738 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 29 lpm-pol 1
Apr 24 00:34:24.478748 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 29 lpm-pol 1
Apr 24 00:34:24.481499 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 24 00:34:24.481720 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 24 00:34:24.481882 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 24 00:34:24.482035 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 24 00:34:24.482299 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 24 00:34:24.498571 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 00:34:24.498602 kernel: GPT:9289727 != 167739391
Apr 24 00:34:24.498614 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 00:34:24.498631 kernel: GPT:9289727 != 167739391
Apr 24 00:34:24.498646 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 00:34:24.498663 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:34:24.498683 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 24 00:34:24.700740 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:34:24.804468 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 24 00:34:24.804584 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 24 00:34:24.804597 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 24 00:34:24.805758 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 24 00:34:24.810465 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 24 00:34:24.813480 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 24 00:34:24.884317 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 24 00:34:24.896919 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 24 00:34:24.911575 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 00:34:24.919953 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 24 00:34:24.920917 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 24 00:34:24.932997 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 24 00:34:24.935177 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 00:34:24.936015 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:34:24.937914 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 00:34:24.941742 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 00:34:24.946239 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 00:34:24.957286 disk-uuid[617]: Primary Header is updated.
Apr 24 00:34:24.957286 disk-uuid[617]: Secondary Entries is updated.
Apr 24 00:34:24.957286 disk-uuid[617]: Secondary Header is updated.
Apr 24 00:34:24.967732 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 00:34:24.972535 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:34:24.982467 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:34:25.988074 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:34:25.989876 disk-uuid[619]: The operation has completed successfully.
Apr 24 00:34:26.050627 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 00:34:26.050783 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 00:34:26.073716 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 00:34:26.092571 sh[639]: Success
Apr 24 00:34:26.113824 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 00:34:26.113965 kernel: device-mapper: uevent: version 1.0.3
Apr 24 00:34:26.113983 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 24 00:34:26.128491 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 24 00:34:26.175104 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 00:34:26.181534 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 00:34:26.199946 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 00:34:26.212464 kernel: BTRFS: device fsid b0afcb9a-4dc6-42cc-b61f-b370046a03ca devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (651)
Apr 24 00:34:26.216841 kernel: BTRFS info (device dm-0): first mount of filesystem b0afcb9a-4dc6-42cc-b61f-b370046a03ca
Apr 24 00:34:26.216869 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:34:26.228500 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Apr 24 00:34:26.228546 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 24 00:34:26.233098 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 24 00:34:26.234576 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 00:34:26.236628 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 24 00:34:26.238334 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 00:34:26.240549 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 00:34:26.241938 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 00:34:26.276598 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (683)
Apr 24 00:34:26.282478 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:34:26.282536 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:34:26.294039 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 00:34:26.294081 kernel: BTRFS info (device sda6): turning on async discard
Apr 24 00:34:26.294094 kernel: BTRFS info (device sda6): enabling free space tree
Apr 24 00:34:26.305483 kernel: BTRFS info (device sda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:34:26.306454 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 00:34:26.311598 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 00:34:26.410556 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 00:34:26.415679 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 00:34:26.426854 ignition[755]: Ignition 2.22.0
Apr 24 00:34:26.426878 ignition[755]: Stage: fetch-offline
Apr 24 00:34:26.426912 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:34:26.426923 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:34:26.430410 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 00:34:26.427011 ignition[755]: parsed url from cmdline: ""
Apr 24 00:34:26.427016 ignition[755]: no config URL provided
Apr 24 00:34:26.427022 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 00:34:26.427032 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Apr 24 00:34:26.427038 ignition[755]: failed to fetch config: resource requires networking
Apr 24 00:34:26.427287 ignition[755]: Ignition finished successfully
Apr 24 00:34:26.461713 systemd-networkd[825]: lo: Link UP
Apr 24 00:34:26.461722 systemd-networkd[825]: lo: Gained carrier
Apr 24 00:34:26.463428 systemd-networkd[825]: Enumeration completed
Apr 24 00:34:26.465562 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 00:34:26.467001 systemd[1]: Reached target network.target - Network.
Apr 24 00:34:26.467043 systemd-networkd[825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 00:34:26.467048 systemd-networkd[825]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 00:34:26.469491 systemd-networkd[825]: eth0: Link UP
Apr 24 00:34:26.470398 systemd-networkd[825]: eth0: Gained carrier
Apr 24 00:34:26.470413 systemd-networkd[825]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 00:34:26.472222 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 00:34:26.502410 ignition[829]: Ignition 2.22.0
Apr 24 00:34:26.503495 ignition[829]: Stage: fetch
Apr 24 00:34:26.503693 ignition[829]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:34:26.503707 ignition[829]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:34:26.503811 ignition[829]: parsed url from cmdline: ""
Apr 24 00:34:26.503815 ignition[829]: no config URL provided
Apr 24 00:34:26.503823 ignition[829]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 00:34:26.503833 ignition[829]: no config at "/usr/lib/ignition/user.ign"
Apr 24 00:34:26.503865 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 24 00:34:26.504034 ignition[829]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 00:34:26.704680 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 24 00:34:26.704879 ignition[829]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 00:34:27.105087 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 24 00:34:27.105263 ignition[829]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 00:34:27.224557 systemd-networkd[825]: eth0: DHCPv4 address 172.236.108.90/24, gateway 172.236.108.1 acquired from 23.205.167.215
Apr 24 00:34:27.575682 systemd-networkd[825]: eth0: Gained IPv6LL
Apr 24 00:34:27.906010 ignition[829]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 24 00:34:27.986915 ignition[829]: PUT result: OK
Apr 24 00:34:27.987017 ignition[829]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 24 00:34:28.097778 ignition[829]: GET result: OK
Apr 24 00:34:28.097916 ignition[829]: parsing config with SHA512: 94c03feca4631515a273b7558b7884319c4f250e86d1a04b3a8753e5b750b905421ea5dec13a68072f36168e6b252523b80b3e27f747aad9f82f9e532a44bf5f
Apr 24 00:34:28.102359 unknown[829]: fetched base config from "system"
Apr 24 00:34:28.102996 ignition[829]: fetch: fetch complete
Apr 24 00:34:28.102376 unknown[829]: fetched base config from "system"
Apr 24 00:34:28.103002 ignition[829]: fetch: fetch passed
Apr 24 00:34:28.102394 unknown[829]: fetched user config from "akamai"
Apr 24 00:34:28.103049 ignition[829]: Ignition finished successfully
Apr 24 00:34:28.107175 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 00:34:28.124574 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 00:34:28.153251 ignition[836]: Ignition 2.22.0
Apr 24 00:34:28.153264 ignition[836]: Stage: kargs
Apr 24 00:34:28.153395 ignition[836]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:34:28.153407 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:34:28.156105 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 00:34:28.154310 ignition[836]: kargs: kargs passed
Apr 24 00:34:28.154357 ignition[836]: Ignition finished successfully
Apr 24 00:34:28.160580 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 00:34:28.182125 ignition[842]: Ignition 2.22.0
Apr 24 00:34:28.183063 ignition[842]: Stage: disks
Apr 24 00:34:28.183186 ignition[842]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:34:28.183197 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:34:28.183844 ignition[842]: disks: disks passed
Apr 24 00:34:28.186570 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 00:34:28.183884 ignition[842]: Ignition finished successfully
Apr 24 00:34:28.188227 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 00:34:28.189342 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 00:34:28.190881 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 00:34:28.192334 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 00:34:28.193900 systemd[1]: Reached target basic.target - Basic System.
Apr 24 00:34:28.196401 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 00:34:28.223946 systemd-fsck[850]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Apr 24 00:34:28.226170 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 00:34:28.229324 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 00:34:28.339734 kernel: EXT4-fs (sda9): mounted filesystem 8c3ace63-1728-4b5e-a7b6-4ef650e41ba1 r/w with ordered data mode. Quota mode: none.
Apr 24 00:34:28.343266 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 00:34:28.345082 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 00:34:28.347320 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 00:34:28.350512 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 00:34:28.353992 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 00:34:28.354075 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 00:34:28.354126 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 00:34:28.363481 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 00:34:28.366779 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 00:34:28.379195 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (858)
Apr 24 00:34:28.379221 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:34:28.379233 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:34:28.386101 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 00:34:28.386128 kernel: BTRFS info (device sda6): turning on async discard
Apr 24 00:34:28.386140 kernel: BTRFS info (device sda6): enabling free space tree
Apr 24 00:34:28.391694 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 00:34:28.438130 initrd-setup-root[882]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 00:34:28.446024 initrd-setup-root[889]: cut: /sysroot/etc/group: No such file or directory
Apr 24 00:34:28.453976 initrd-setup-root[896]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 00:34:28.459464 initrd-setup-root[903]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 00:34:28.573708 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 00:34:28.576306 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 00:34:28.578803 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 00:34:28.597707 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 00:34:28.601184 kernel: BTRFS info (device sda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:34:28.620302 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 00:34:28.638453 ignition[971]: INFO : Ignition 2.22.0
Apr 24 00:34:28.638453 ignition[971]: INFO : Stage: mount
Apr 24 00:34:28.640163 ignition[971]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:34:28.640163 ignition[971]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:34:28.640163 ignition[971]: INFO : mount: mount passed
Apr 24 00:34:28.640163 ignition[971]: INFO : Ignition finished successfully
Apr 24 00:34:28.641912 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 00:34:28.645411 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 00:34:29.342871 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 00:34:29.367471 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (982)
Apr 24 00:34:29.371969 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:34:29.372002 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:34:29.379812 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 00:34:29.379843 kernel: BTRFS info (device sda6): turning on async discard
Apr 24 00:34:29.384050 kernel: BTRFS info (device sda6): enabling free space tree
Apr 24 00:34:29.386844 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 00:34:29.420924 ignition[999]: INFO : Ignition 2.22.0
Apr 24 00:34:29.420924 ignition[999]: INFO : Stage: files
Apr 24 00:34:29.422874 ignition[999]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:34:29.422874 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:34:29.422874 ignition[999]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 00:34:29.425932 ignition[999]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 00:34:29.425932 ignition[999]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 00:34:29.427929 ignition[999]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 00:34:29.427929 ignition[999]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 00:34:29.429961 ignition[999]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 00:34:29.427965 unknown[999]: wrote ssh authorized keys file for user: core
Apr 24 00:34:29.431848 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 00:34:29.431848 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 00:34:29.637329 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 00:34:29.679293 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 00:34:29.681214 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 24 00:34:30.100914 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 00:34:30.714067 ignition[999]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 24 00:34:30.714067 ignition[999]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 00:34:30.717658 ignition[999]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 00:34:30.719627 ignition[999]: INFO : files: files passed
Apr 24 00:34:30.719627 ignition[999]: INFO : Ignition finished successfully
Apr 24 00:34:30.721915 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 00:34:30.727566 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 00:34:30.730707 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 00:34:30.744362 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 00:34:30.745116 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 00:34:30.751949 initrd-setup-root-after-ignition[1029]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:34:30.751949 initrd-setup-root-after-ignition[1029]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:34:30.755534 initrd-setup-root-after-ignition[1033]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:34:30.756700 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 00:34:30.758453 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 00:34:30.760671 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 00:34:30.806574 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 00:34:30.806730 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 00:34:30.808732 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 00:34:30.809854 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 00:34:30.811476 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 00:34:30.812326 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 00:34:30.831614 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 00:34:30.834518 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 00:34:30.853873 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 00:34:30.854724 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:34:30.856378 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 00:34:30.858108 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 00:34:30.858484 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 00:34:30.860071 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 00:34:30.861108 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 00:34:30.862724 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 00:34:30.864116 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 00:34:30.865567 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 00:34:30.867135 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 24 00:34:30.868735 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 00:34:30.870321 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 00:34:30.871907 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 00:34:30.873524 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 00:34:30.875150 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 00:34:30.876648 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 00:34:30.876823 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 00:34:30.878497 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:34:30.879500 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:34:30.880940 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 00:34:30.881049 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:34:30.882654 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 00:34:30.882790 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 00:34:30.884737 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 00:34:30.884898 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 00:34:30.885843 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 00:34:30.885939 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 00:34:30.889666 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 00:34:30.890833 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 00:34:30.892574 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:34:30.895836 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 00:34:30.896543 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 00:34:30.896694 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:34:30.898363 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 00:34:30.898485 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 00:34:30.937690 ignition[1053]: INFO : Ignition 2.22.0
Apr 24 00:34:30.937690 ignition[1053]: INFO : Stage: umount
Apr 24 00:34:30.937690 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:34:30.937690 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:34:30.937690 ignition[1053]: INFO : umount: umount passed
Apr 24 00:34:30.937690 ignition[1053]: INFO : Ignition finished successfully
Apr 24 00:34:30.906121 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 00:34:30.906232 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 00:34:30.933461 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 00:34:30.933655 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 00:34:30.936924 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 00:34:30.936999 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 00:34:30.939594 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 00:34:30.939664 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 00:34:30.943687 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 00:34:30.943756 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 00:34:30.945040 systemd[1]: Stopped target network.target - Network.
Apr 24 00:34:30.945735 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 00:34:30.945793 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 00:34:30.948711 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 00:34:30.949835 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 00:34:30.953687 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:34:30.955626 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 00:34:30.957341 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 00:34:30.958812 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 00:34:30.958868 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 00:34:30.960414 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 00:34:30.960481 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 00:34:30.962416 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 00:34:30.962492 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 00:34:30.963894 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 00:34:30.963945 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 00:34:30.965764 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 00:34:30.967346 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 00:34:30.970739 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 00:34:30.971576 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 00:34:30.971723 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 00:34:30.975550 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 00:34:30.975687 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 00:34:30.981133 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 24 00:34:30.981433 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 00:34:30.981656 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 00:34:30.984908 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 24 00:34:30.987919 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 24 00:34:30.989753 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 00:34:30.989808 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:34:30.991479 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 00:34:30.991549 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 00:34:30.994544 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 00:34:30.996981 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 00:34:30.997049 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 00:34:31.000004 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 00:34:31.000074 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:34:31.002557 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 00:34:31.002614 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:34:31.004279 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 00:34:31.004332 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 00:34:31.006236 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 00:34:31.008019 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 24 00:34:31.008089 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:34:31.024405 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 00:34:31.034932 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 00:34:31.036408 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 00:34:31.036522 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:34:31.038068 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 00:34:31.038116 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:34:31.039770 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 00:34:31.039826 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 00:34:31.042089 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 00:34:31.042147 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 00:34:31.043726 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 00:34:31.043781 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 00:34:31.047587 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 00:34:31.048988 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 24 00:34:31.049060 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:34:31.052642 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 00:34:31.052714 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 00:34:31.055008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 00:34:31.055065 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:34:31.058267 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 24 00:34:31.058335 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 24 00:34:31.058389 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:34:31.058851 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 00:34:31.058983 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 00:34:31.066800 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 00:34:31.066931 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 00:34:31.068370 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 00:34:31.070898 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 00:34:31.089955 systemd[1]: Switching root.
Apr 24 00:34:31.147140 systemd-journald[186]: Journal stopped
Apr 24 00:34:32.428260 systemd-journald[186]: Received SIGTERM from PID 1 (systemd).
Apr 24 00:34:32.428291 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 00:34:32.428304 kernel: SELinux: policy capability open_perms=1
Apr 24 00:34:32.428313 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 00:34:32.428322 kernel: SELinux: policy capability always_check_network=0
Apr 24 00:34:32.428333 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 00:34:32.428343 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 00:34:32.428352 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 00:34:32.428362 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 00:34:32.428372 kernel: SELinux: policy capability userspace_initial_context=0
Apr 24 00:34:32.428381 kernel: audit: type=1403 audit(1776990871.367:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 00:34:32.428391 systemd[1]: Successfully loaded SELinux policy in 72.343ms.
Apr 24 00:34:32.428404 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.138ms.
Apr 24 00:34:32.428415 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 24 00:34:32.428426 systemd[1]: Detected virtualization kvm.
Apr 24 00:34:32.428455 systemd[1]: Detected architecture x86-64.
Apr 24 00:34:32.428468 systemd[1]: Detected first boot.
Apr 24 00:34:32.428478 systemd[1]: Initializing machine ID from random generator.
Apr 24 00:34:32.428488 zram_generator::config[1096]: No configuration found.
Apr 24 00:34:32.428499 kernel: Guest personality initialized and is inactive
Apr 24 00:34:32.428508 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 24 00:34:32.428517 kernel: Initialized host personality
Apr 24 00:34:32.428526 kernel: NET: Registered PF_VSOCK protocol family
Apr 24 00:34:32.428536 systemd[1]: Populated /etc with preset unit settings.
Apr 24 00:34:32.428549 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 24 00:34:32.428559 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 00:34:32.428570 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 00:34:32.428580 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 00:34:32.428590 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 00:34:32.428600 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 00:34:32.428610 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 00:34:32.428623 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 00:34:32.428634 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 00:34:32.428644 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 00:34:32.428654 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 00:34:32.428664 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 00:34:32.428674 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:34:32.428685 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:34:32.428695 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 00:34:32.428707 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 00:34:32.428720 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 00:34:32.428731 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 00:34:32.428741 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 00:34:32.428751 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:34:32.428762 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:34:32.428772 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 00:34:32.428784 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 00:34:32.428795 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 00:34:32.428805 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 00:34:32.428815 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:34:32.428825 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 00:34:32.428836 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 00:34:32.428846 systemd[1]: Reached target swap.target - Swaps.
Apr 24 00:34:32.428857 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 00:34:32.428867 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 00:34:32.428880 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 24 00:34:32.428891 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:34:32.428901 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:34:32.428911 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:34:32.428924 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 00:34:32.428934 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 00:34:32.428944 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 00:34:32.428955 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 00:34:32.428965 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:34:32.428975 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 00:34:32.428986 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 00:34:32.428996 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 00:34:32.429009 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 00:34:32.429019 systemd[1]: Reached target machines.target - Containers.
Apr 24 00:34:32.429029 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 00:34:32.429040 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 00:34:32.429050 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 00:34:32.429061 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 00:34:32.429071 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 00:34:32.429082 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 00:34:32.429093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 00:34:32.429105 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 00:34:32.429115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 00:34:32.429126 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 00:34:32.429136 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 00:34:32.429147 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 00:34:32.429157 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 00:34:32.429167 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 00:34:32.429178 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 24 00:34:32.429353 kernel: loop: module loaded
Apr 24 00:34:32.429369 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 00:34:32.429385 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 00:34:32.429401 kernel: ACPI: bus type drm_connector registered
Apr 24 00:34:32.429413 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 00:34:32.429424 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 00:34:32.429478 kernel: fuse: init (API version 7.41)
Apr 24 00:34:32.429491 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 24 00:34:32.429506 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 00:34:32.429517 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 00:34:32.429527 systemd[1]: Stopped verity-setup.service.
Apr 24 00:34:32.429539 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:34:32.429573 systemd-journald[1187]: Collecting audit messages is disabled.
Apr 24 00:34:32.429597 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 00:34:32.429727 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 00:34:32.429746 systemd-journald[1187]: Journal started
Apr 24 00:34:32.429766 systemd-journald[1187]: Runtime Journal (/run/log/journal/aa0f7dec85f94e1099087da7a93b2db5) is 8M, max 78.2M, 70.2M free.
Apr 24 00:34:32.021314 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 00:34:32.027563 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 24 00:34:32.028080 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 00:34:32.439540 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 00:34:32.439123 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 00:34:32.440074 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 00:34:32.440966 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 00:34:32.441863 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 00:34:32.442965 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 00:34:32.444089 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:34:32.445415 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 00:34:32.445730 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 00:34:32.446876 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 00:34:32.447156 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 00:34:32.448584 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 00:34:32.448851 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 00:34:32.449989 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 00:34:32.450245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 00:34:32.451357 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 00:34:32.451746 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 00:34:32.452920 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 00:34:32.453123 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 00:34:32.454262 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:34:32.455360 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:34:32.456729 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 00:34:32.457960 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 24 00:34:32.468853 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 00:34:32.473512 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 00:34:32.475324 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 00:34:32.476153 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 00:34:32.476236 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 00:34:32.477890 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 24 00:34:32.482626 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 00:34:32.486612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 00:34:32.489549 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 00:34:32.494535 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 00:34:32.495311 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 00:34:32.496724 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 00:34:32.498036 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 00:34:32.502761 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 00:34:32.507639 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 00:34:32.511980 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 00:34:32.517768 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 00:34:32.519922 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 00:34:32.528529 systemd-journald[1187]: Time spent on flushing to /var/log/journal/aa0f7dec85f94e1099087da7a93b2db5 is 37.690ms for 1007 entries.
Apr 24 00:34:32.528529 systemd-journald[1187]: System Journal (/var/log/journal/aa0f7dec85f94e1099087da7a93b2db5) is 8M, max 195.6M, 187.6M free.
Apr 24 00:34:32.596823 systemd-journald[1187]: Received client request to flush runtime journal.
Apr 24 00:34:32.596858 kernel: loop0: detected capacity change from 0 to 8
Apr 24 00:34:32.596872 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 00:34:32.563891 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 00:34:32.565255 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 00:34:32.569772 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 24 00:34:32.599934 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 00:34:32.619457 kernel: loop1: detected capacity change from 0 to 128560
Apr 24 00:34:32.614843 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:34:32.627867 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:34:32.642255 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 00:34:32.646822 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 24 00:34:32.650755 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 00:34:32.659483 kernel: loop2: detected capacity change from 0 to 217752
Apr 24 00:34:32.684867 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Apr 24 00:34:32.685478 systemd-tmpfiles[1244]: ACLs are not supported, ignoring.
Apr 24 00:34:32.692375 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 00:34:32.695501 kernel: loop3: detected capacity change from 0 to 110984
Apr 24 00:34:32.744791 kernel: loop4: detected capacity change from 0 to 8
Apr 24 00:34:32.755461 kernel: loop5: detected capacity change from 0 to 128560
Apr 24 00:34:32.776813 kernel: loop6: detected capacity change from 0 to 217752
Apr 24 00:34:32.798518 kernel: loop7: detected capacity change from 0 to 110984
Apr 24 00:34:32.818046 (sd-merge)[1250]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'.
Apr 24 00:34:32.820755 (sd-merge)[1250]: Merged extensions into '/usr'.
Apr 24 00:34:32.826390 systemd[1]: Reload requested from client PID 1221 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 24 00:34:32.826407 systemd[1]: Reloading...
Apr 24 00:34:32.969467 zram_generator::config[1276]: No configuration found.
Apr 24 00:34:33.023048 ldconfig[1216]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 24 00:34:33.164580 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 24 00:34:33.164788 systemd[1]: Reloading finished in 337 ms.
Apr 24 00:34:33.185372 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 24 00:34:33.186604 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 24 00:34:33.187759 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 24 00:34:33.197885 systemd[1]: Starting ensure-sysext.service...
Apr 24 00:34:33.201548 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 00:34:33.212550 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 00:34:33.229538 systemd[1]: Reload requested from client PID 1320 ('systemctl') (unit ensure-sysext.service)...
Apr 24 00:34:33.229554 systemd[1]: Reloading... Apr 24 00:34:33.233816 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Apr 24 00:34:33.234214 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 24 00:34:33.234807 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 24 00:34:33.235528 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 24 00:34:33.237026 systemd-tmpfiles[1321]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 24 00:34:33.237371 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Apr 24 00:34:33.237546 systemd-tmpfiles[1321]: ACLs are not supported, ignoring. Apr 24 00:34:33.245111 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 00:34:33.245127 systemd-tmpfiles[1321]: Skipping /boot Apr 24 00:34:33.264089 systemd-udevd[1322]: Using default interface naming scheme 'v255'. Apr 24 00:34:33.266358 systemd-tmpfiles[1321]: Detected autofs mount point /boot during canonicalization of boot. Apr 24 00:34:33.266369 systemd-tmpfiles[1321]: Skipping /boot Apr 24 00:34:33.336492 zram_generator::config[1352]: No configuration found. Apr 24 00:34:33.553512 kernel: mousedev: PS/2 mouse device common for all mice Apr 24 00:34:33.553585 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 24 00:34:33.598628 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 24 00:34:33.599022 systemd[1]: Reloading finished in 369 ms. Apr 24 00:34:33.608734 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 24 00:34:33.611455 kernel: ACPI: button: Power Button [PWRF] Apr 24 00:34:33.611850 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 24 00:34:33.640673 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 24 00:34:33.646043 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 24 00:34:33.650417 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 24 00:34:33.660504 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 24 00:34:33.666472 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 24 00:34:33.667076 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 24 00:34:33.669840 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 24 00:34:33.675256 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 24 00:34:33.683479 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:34:33.683648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 00:34:33.687626 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 24 00:34:33.691741 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 24 00:34:33.702564 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 24 00:34:33.704668 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 00:34:33.704787 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Apr 24 00:34:33.704874 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:34:33.712741 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 24 00:34:33.720185 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:34:33.720855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 00:34:33.721094 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 24 00:34:33.721225 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 24 00:34:33.721359 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:34:33.728382 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 24 00:34:33.747061 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:34:33.749620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 24 00:34:33.759357 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 24 00:34:33.761620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 24 00:34:33.761657 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 24 00:34:33.767789 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 24 00:34:33.768874 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 24 00:34:33.771006 systemd[1]: Finished ensure-sysext.service. Apr 24 00:34:33.772059 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 24 00:34:33.781749 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 24 00:34:33.809663 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 24 00:34:33.814070 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 24 00:34:33.825039 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 24 00:34:33.827993 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 24 00:34:33.831924 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 24 00:34:33.843613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 24 00:34:33.849576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 24 00:34:33.852286 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 24 00:34:33.854697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 24 00:34:33.863915 augenrules[1483]: No rules Apr 24 00:34:33.884465 kernel: EDAC MC: Ver: 3.0.0 Apr 24 00:34:33.906812 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 00:34:33.907507 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 24 00:34:33.911014 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 24 00:34:33.911255 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 24 00:34:33.934899 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 24 00:34:33.935052 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 24 00:34:33.939177 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 24 00:34:33.976432 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 24 00:34:33.996799 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Apr 24 00:34:34.015574 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 24 00:34:34.036770 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 24 00:34:34.183666 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 24 00:34:34.187145 systemd-resolved[1446]: Positive Trust Anchors: Apr 24 00:34:34.187658 systemd-resolved[1446]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 24 00:34:34.187728 systemd-resolved[1446]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 24 00:34:34.188049 systemd-networkd[1444]: lo: Link UP Apr 24 00:34:34.188061 systemd-networkd[1444]: lo: Gained carrier Apr 24 00:34:34.192991 systemd-networkd[1444]: Enumeration completed Apr 24 00:34:34.193048 systemd-resolved[1446]: Defaulting to hostname 'linux'. Apr 24 00:34:34.193419 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:34:34.193424 systemd-networkd[1444]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 24 00:34:34.194590 systemd-timesyncd[1471]: No network connectivity, watching for changes. Apr 24 00:34:34.196271 systemd-networkd[1444]: eth0: Link UP Apr 24 00:34:34.196467 systemd-networkd[1444]: eth0: Gained carrier Apr 24 00:34:34.196489 systemd-networkd[1444]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 24 00:34:34.204448 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 24 00:34:34.205556 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 24 00:34:34.207012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 24 00:34:34.209022 systemd[1]: Reached target network.target - Network. 
Apr 24 00:34:34.209969 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 24 00:34:34.210899 systemd[1]: Reached target sysinit.target - System Initialization. Apr 24 00:34:34.211990 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 24 00:34:34.213026 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 24 00:34:34.213919 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 24 00:34:34.214750 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 24 00:34:34.215586 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 24 00:34:34.215625 systemd[1]: Reached target paths.target - Path Units. Apr 24 00:34:34.216548 systemd[1]: Reached target time-set.target - System Time Set. Apr 24 00:34:34.217829 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 24 00:34:34.218752 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 24 00:34:34.219587 systemd[1]: Reached target timers.target - Timer Units. Apr 24 00:34:34.221911 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 24 00:34:34.224936 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 24 00:34:34.228342 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 24 00:34:34.229296 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 24 00:34:34.230072 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 24 00:34:34.232993 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Apr 24 00:34:34.234047 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 24 00:34:34.236134 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 24 00:34:34.239546 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 24 00:34:34.240866 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 24 00:34:34.242865 systemd[1]: Reached target sockets.target - Socket Units. Apr 24 00:34:34.243750 systemd[1]: Reached target basic.target - Basic System. Apr 24 00:34:34.244594 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 24 00:34:34.244628 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 24 00:34:34.253512 systemd[1]: Starting containerd.service - containerd container runtime... Apr 24 00:34:34.256338 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 24 00:34:34.261572 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 24 00:34:34.266606 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 24 00:34:34.271883 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 24 00:34:34.275620 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 24 00:34:34.297786 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 24 00:34:34.299771 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 24 00:34:34.303852 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 24 00:34:34.310578 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Apr 24 00:34:34.316494 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 24 00:34:34.323642 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 24 00:34:34.329461 jq[1518]: false Apr 24 00:34:34.335079 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 24 00:34:34.337369 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 24 00:34:34.339081 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 24 00:34:34.339491 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing passwd entry cache Apr 24 00:34:34.339720 oslogin_cache_refresh[1520]: Refreshing passwd entry cache Apr 24 00:34:34.341614 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting users, quitting Apr 24 00:34:34.341655 oslogin_cache_refresh[1520]: Failure getting users, quitting Apr 24 00:34:34.341708 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 24 00:34:34.341734 oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 24 00:34:34.341805 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing group entry cache Apr 24 00:34:34.341845 oslogin_cache_refresh[1520]: Refreshing group entry cache Apr 24 00:34:34.342367 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting groups, quitting Apr 24 00:34:34.342406 oslogin_cache_refresh[1520]: Failure getting groups, quitting Apr 24 00:34:34.342474 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Apr 24 00:34:34.342501 oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 24 00:34:34.344995 systemd[1]: Starting update-engine.service - Update Engine... Apr 24 00:34:34.349296 extend-filesystems[1519]: Found /dev/sda6 Apr 24 00:34:34.352491 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 24 00:34:34.357489 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 24 00:34:34.365367 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 24 00:34:34.367382 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 24 00:34:34.368942 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 24 00:34:34.369397 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 24 00:34:34.369797 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 24 00:34:34.371997 extend-filesystems[1519]: Found /dev/sda9 Apr 24 00:34:34.374276 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 24 00:34:34.376688 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 24 00:34:34.398888 extend-filesystems[1519]: Checking size of /dev/sda9 Apr 24 00:34:34.410181 jq[1533]: true Apr 24 00:34:34.442537 coreos-metadata[1515]: Apr 24 00:34:34.440 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Apr 24 00:34:34.446906 systemd[1]: motdgen.service: Deactivated successfully. Apr 24 00:34:34.447231 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 24 00:34:34.452834 extend-filesystems[1519]: Resized partition /dev/sda9 Apr 24 00:34:34.456896 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 24 00:34:34.471626 extend-filesystems[1565]: resize2fs 1.47.3 (8-Jul-2025) Apr 24 00:34:34.474948 tar[1538]: linux-amd64/LICENSE Apr 24 00:34:34.474948 tar[1538]: linux-amd64/helm Apr 24 00:34:34.478464 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks Apr 24 00:34:34.481586 update_engine[1531]: I20260424 00:34:34.479068 1531 main.cc:92] Flatcar Update Engine starting Apr 24 00:34:34.498301 jq[1556]: true Apr 24 00:34:34.520371 dbus-daemon[1516]: [system] SELinux support is enabled Apr 24 00:34:34.520612 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 24 00:34:34.522818 systemd-logind[1530]: Watching system buttons on /dev/input/event2 (Power Button) Apr 24 00:34:34.525650 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 24 00:34:34.525681 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 24 00:34:34.526540 systemd-logind[1530]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 24 00:34:34.526777 systemd-logind[1530]: New seat seat0. Apr 24 00:34:34.528772 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 24 00:34:34.528794 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 24 00:34:34.530297 systemd[1]: Started systemd-logind.service - User Login Management. 
Apr 24 00:34:34.545720 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 24 00:34:34.546785 update_engine[1531]: I20260424 00:34:34.546504 1531 update_check_scheduler.cc:74] Next update check in 4m44s Apr 24 00:34:34.548354 systemd[1]: Started update-engine.service - Update Engine. Apr 24 00:34:34.554700 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 24 00:34:34.642887 bash[1583]: Updated "/home/core/.ssh/authorized_keys" Apr 24 00:34:34.644485 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 24 00:34:34.650708 systemd[1]: Starting sshkeys.service... Apr 24 00:34:34.697194 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 24 00:34:34.702402 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 24 00:34:34.916561 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Apr 24 00:34:34.928789 extend-filesystems[1565]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 24 00:34:34.928789 extend-filesystems[1565]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 24 00:34:34.928789 extend-filesystems[1565]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Apr 24 00:34:34.937172 extend-filesystems[1519]: Resized filesystem in /dev/sda9 Apr 24 00:34:34.934415 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 00:34:34.936232 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Apr 24 00:34:34.946776 coreos-metadata[1591]: Apr 24 00:34:34.946 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Apr 24 00:34:34.985676 containerd[1559]: time="2026-04-24T00:34:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 24 00:34:34.988665 containerd[1559]: time="2026-04-24T00:34:34.988400026Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 24 00:34:35.012574 systemd-networkd[1444]: eth0: DHCPv4 address 172.236.108.90/24, gateway 172.236.108.1 acquired from 23.205.167.215 Apr 24 00:34:35.014222 dbus-daemon[1516]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1444 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 24 00:34:35.018736 containerd[1559]: time="2026-04-24T00:34:35.018690036Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.82µs" Apr 24 00:34:35.019513 containerd[1559]: time="2026-04-24T00:34:35.019489907Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 24 00:34:35.019596 containerd[1559]: time="2026-04-24T00:34:35.019581117Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 24 00:34:35.020589 containerd[1559]: time="2026-04-24T00:34:35.020566218Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 24 00:34:35.020930 containerd[1559]: time="2026-04-24T00:34:35.020913388Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 24 00:34:35.021003 containerd[1559]: time="2026-04-24T00:34:35.020990898Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 24 00:34:35.021112 containerd[1559]: time="2026-04-24T00:34:35.021094179Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 24 00:34:35.021153 systemd-timesyncd[1471]: Network configuration changed, trying to establish connection. Apr 24 00:34:35.021391 containerd[1559]: time="2026-04-24T00:34:35.021377659Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 24 00:34:35.021797 containerd[1559]: time="2026-04-24T00:34:35.021775489Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 24 00:34:35.022200 containerd[1559]: time="2026-04-24T00:34:35.022183910Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 24 00:34:35.022253 containerd[1559]: time="2026-04-24T00:34:35.022240470Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 24 00:34:35.022302 containerd[1559]: time="2026-04-24T00:34:35.022290660Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 24 00:34:35.022478 containerd[1559]: time="2026-04-24T00:34:35.022457940Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 24 00:34:35.022956 containerd[1559]: time="2026-04-24T00:34:35.022937250Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 24 00:34:35.023212 containerd[1559]: 
time="2026-04-24T00:34:35.023196671Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 24 00:34:35.023459 containerd[1559]: time="2026-04-24T00:34:35.023377531Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 24 00:34:35.023459 containerd[1559]: time="2026-04-24T00:34:35.023412231Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 24 00:34:35.023940 containerd[1559]: time="2026-04-24T00:34:35.023774511Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 24 00:34:35.024683 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 24 00:34:35.024895 containerd[1559]: time="2026-04-24T00:34:35.024878552Z" level=info msg="metadata content store policy set" policy=shared Apr 24 00:34:35.029726 containerd[1559]: time="2026-04-24T00:34:35.029550127Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 24 00:34:35.029726 containerd[1559]: time="2026-04-24T00:34:35.029647717Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.029673477Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.029864867Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.029897247Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.029915737Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.029935347Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.029952347Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.029966987Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.029981567Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.029993807Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.030010477Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.030418118Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.030471328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.030508518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 24 00:34:35.031135 containerd[1559]: time="2026-04-24T00:34:35.030536688Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 24 00:34:35.031406 containerd[1559]: time="2026-04-24T00:34:35.030554438Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 24 00:34:35.031406 containerd[1559]: time="2026-04-24T00:34:35.030571718Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 24 00:34:35.031406 containerd[1559]: time="2026-04-24T00:34:35.030589598Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 24 00:34:35.031406 containerd[1559]: time="2026-04-24T00:34:35.030606978Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 24 00:34:35.031406 containerd[1559]: time="2026-04-24T00:34:35.030627328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 24 00:34:35.031406 containerd[1559]: time="2026-04-24T00:34:35.030644588Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 24 00:34:35.031406 containerd[1559]: time="2026-04-24T00:34:35.030666048Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 24 00:34:35.031406 containerd[1559]: time="2026-04-24T00:34:35.030736768Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 24 00:34:35.031406 containerd[1559]: time="2026-04-24T00:34:35.030753698Z" level=info msg="Start snapshots syncer" Apr 24 00:34:35.032917 containerd[1559]: time="2026-04-24T00:34:35.032414060Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 24 00:34:35.032917 containerd[1559]: time="2026-04-24T00:34:35.032768760Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 24 00:34:35.033246 containerd[1559]: time="2026-04-24T00:34:35.032817450Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 24 00:34:35.034688 containerd[1559]: time="2026-04-24T00:34:35.034115552Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 24 00:34:35.034688 containerd[1559]: time="2026-04-24T00:34:35.034298142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 24 00:34:35.034688 containerd[1559]: time="2026-04-24T00:34:35.034322082Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 24 00:34:35.034688 containerd[1559]: time="2026-04-24T00:34:35.034337722Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 24 00:34:35.034688 containerd[1559]: time="2026-04-24T00:34:35.034353862Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 24 00:34:35.034688 containerd[1559]: time="2026-04-24T00:34:35.034375142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 24 00:34:35.034688 containerd[1559]: time="2026-04-24T00:34:35.034387832Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 24 00:34:35.034688 containerd[1559]: time="2026-04-24T00:34:35.034398862Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 24 00:34:35.034688 containerd[1559]: time="2026-04-24T00:34:35.034421852Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 24 00:34:35.035492 containerd[1559]: time="2026-04-24T00:34:35.035283033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 24 00:34:35.035492 containerd[1559]: time="2026-04-24T00:34:35.035310713Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 24 00:34:35.035492 containerd[1559]: time="2026-04-24T00:34:35.035383503Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035408423Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035683773Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035705853Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035718353Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035733173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035758143Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035782613Z" level=info msg="runtime interface created" Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035791553Z" level=info msg="created NRI interface" Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035804983Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035821093Z" level=info msg="Connect containerd service" Apr 24 00:34:35.035896 containerd[1559]: time="2026-04-24T00:34:35.035850923Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 24 00:34:35.039365 
containerd[1559]: time="2026-04-24T00:34:35.038669126Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 00:34:35.081554 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 00:34:35.081423 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 24 00:34:35.159429 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 00:34:35.164721 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 00:34:35.201536 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 00:34:35.201832 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 24 00:34:35.207081 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 24 00:34:35.212705 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 24 00:34:35.216506 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 24 00:34:35.217570 dbus-daemon[1516]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1605 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 24 00:34:35.226581 systemd[1]: Starting polkit.service - Authorization Manager... Apr 24 00:34:35.248019 containerd[1559]: time="2026-04-24T00:34:35.247979365Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Apr 24 00:34:35.248539 containerd[1559]: time="2026-04-24T00:34:35.248177066Z" level=info msg="Start subscribing containerd event" Apr 24 00:34:35.248680 containerd[1559]: time="2026-04-24T00:34:35.248637126Z" level=info msg="Start recovering state" Apr 24 00:34:35.248986 containerd[1559]: time="2026-04-24T00:34:35.248963186Z" level=info msg="Start event monitor" Apr 24 00:34:35.249104 containerd[1559]: time="2026-04-24T00:34:35.249084376Z" level=info msg="Start cni network conf syncer for default" Apr 24 00:34:35.249277 containerd[1559]: time="2026-04-24T00:34:35.249260197Z" level=info msg="Start streaming server" Apr 24 00:34:35.249706 containerd[1559]: time="2026-04-24T00:34:35.249427147Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 24 00:34:35.249780 containerd[1559]: time="2026-04-24T00:34:35.249571567Z" level=info msg="runtime interface starting up..." Apr 24 00:34:35.249929 containerd[1559]: time="2026-04-24T00:34:35.249858137Z" level=info msg="starting plugins..." Apr 24 00:34:35.249929 containerd[1559]: time="2026-04-24T00:34:35.249887527Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 24 00:34:35.250213 containerd[1559]: time="2026-04-24T00:34:35.248779486Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 24 00:34:35.250648 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 00:34:35.253131 containerd[1559]: time="2026-04-24T00:34:35.252993520Z" level=info msg="containerd successfully booted in 0.270879s" Apr 24 00:34:35.271595 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 24 00:34:36.076267 systemd-timesyncd[1471]: Contacted time server 216.144.228.179:123 (1.flatcar.pool.ntp.org). Apr 24 00:34:36.076384 systemd-timesyncd[1471]: Initial clock synchronization to Fri 2026-04-24 00:34:36.075941 UTC. Apr 24 00:34:36.076444 systemd-resolved[1446]: Clock change detected. Flushing caches. 
Apr 24 00:34:36.078010 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 24 00:34:36.084681 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 24 00:34:36.087735 systemd[1]: Reached target getty.target - Login Prompts. Apr 24 00:34:36.122877 tar[1538]: linux-amd64/README.md Apr 24 00:34:36.154725 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 24 00:34:36.157521 polkitd[1632]: Started polkitd version 126 Apr 24 00:34:36.161902 polkitd[1632]: Loading rules from directory /etc/polkit-1/rules.d Apr 24 00:34:36.162177 polkitd[1632]: Loading rules from directory /run/polkit-1/rules.d Apr 24 00:34:36.162229 polkitd[1632]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Apr 24 00:34:36.162441 polkitd[1632]: Loading rules from directory /usr/local/share/polkit-1/rules.d Apr 24 00:34:36.162471 polkitd[1632]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Apr 24 00:34:36.162508 polkitd[1632]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 24 00:34:36.163268 polkitd[1632]: Finished loading, compiling and executing 2 rules Apr 24 00:34:36.163613 systemd[1]: Started polkit.service - Authorization Manager. Apr 24 00:34:36.164000 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 24 00:34:36.164620 polkitd[1632]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 24 00:34:36.175541 systemd-hostnamed[1605]: Hostname set to <172-236-108-90> (transient) Apr 24 00:34:36.176180 systemd-resolved[1446]: System hostname changed to '172-236-108-90'. 
Apr 24 00:34:36.258060 coreos-metadata[1515]: Apr 24 00:34:36.258 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 24 00:34:36.348419 coreos-metadata[1515]: Apr 24 00:34:36.348 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Apr 24 00:34:36.530875 coreos-metadata[1515]: Apr 24 00:34:36.530 INFO Fetch successful Apr 24 00:34:36.531105 coreos-metadata[1515]: Apr 24 00:34:36.531 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Apr 24 00:34:36.759448 coreos-metadata[1591]: Apr 24 00:34:36.759 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Apr 24 00:34:36.795560 coreos-metadata[1515]: Apr 24 00:34:36.795 INFO Fetch successful Apr 24 00:34:36.871616 coreos-metadata[1591]: Apr 24 00:34:36.871 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Apr 24 00:34:36.913001 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 24 00:34:36.914912 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 24 00:34:36.952415 systemd-networkd[1444]: eth0: Gained IPv6LL Apr 24 00:34:36.956055 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 24 00:34:36.957665 systemd[1]: Reached target network-online.target - Network is Online. Apr 24 00:34:36.961362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:34:36.964547 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 24 00:34:36.998306 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 24 00:34:37.008121 coreos-metadata[1591]: Apr 24 00:34:37.007 INFO Fetch successful Apr 24 00:34:37.045874 update-ssh-keys[1682]: Updated "/home/core/.ssh/authorized_keys" Apr 24 00:34:37.045765 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 24 00:34:37.049208 systemd[1]: Finished sshkeys.service. 
Apr 24 00:34:37.870997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:34:37.872307 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 24 00:34:37.929645 systemd[1]: Startup finished in 3.044s (kernel) + 8.696s (initrd) + 5.831s (userspace) = 17.572s. Apr 24 00:34:37.937635 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 00:34:38.389750 kubelet[1691]: E0424 00:34:38.389687 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 00:34:38.393560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 00:34:38.393761 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 00:34:38.394150 systemd[1]: kubelet.service: Consumed 823ms CPU time, 255.2M memory peak. Apr 24 00:34:38.406229 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 24 00:34:38.407421 systemd[1]: Started sshd@0-172.236.108.90:22-20.229.252.112:52526.service - OpenSSH per-connection server daemon (20.229.252.112:52526). Apr 24 00:34:38.982634 sshd[1703]: Accepted publickey for core from 20.229.252.112 port 52526 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:34:38.984792 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:34:38.992802 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 24 00:34:38.994533 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 24 00:34:39.006376 systemd-logind[1530]: New session 1 of user core. 
Apr 24 00:34:39.020201 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 24 00:34:39.024132 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 24 00:34:39.042501 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 24 00:34:39.045682 systemd-logind[1530]: New session c1 of user core. Apr 24 00:34:39.199715 systemd[1708]: Queued start job for default target default.target. Apr 24 00:34:39.215782 systemd[1708]: Created slice app.slice - User Application Slice. Apr 24 00:34:39.215819 systemd[1708]: Reached target paths.target - Paths. Apr 24 00:34:39.215891 systemd[1708]: Reached target timers.target - Timers. Apr 24 00:34:39.217863 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 24 00:34:39.236617 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 24 00:34:39.236784 systemd[1708]: Reached target sockets.target - Sockets. Apr 24 00:34:39.236834 systemd[1708]: Reached target basic.target - Basic System. Apr 24 00:34:39.236904 systemd[1708]: Reached target default.target - Main User Target. Apr 24 00:34:39.236943 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 24 00:34:39.236953 systemd[1708]: Startup finished in 182ms. Apr 24 00:34:39.248422 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 24 00:34:39.565124 systemd[1]: Started sshd@1-172.236.108.90:22-20.229.252.112:52528.service - OpenSSH per-connection server daemon (20.229.252.112:52528). Apr 24 00:34:40.088710 sshd[1719]: Accepted publickey for core from 20.229.252.112 port 52528 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:34:40.090791 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:34:40.096355 systemd-logind[1530]: New session 2 of user core. Apr 24 00:34:40.104471 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 24 00:34:40.385389 sshd[1722]: Connection closed by 20.229.252.112 port 52528 Apr 24 00:34:40.385488 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Apr 24 00:34:40.389334 systemd[1]: sshd@1-172.236.108.90:22-20.229.252.112:52528.service: Deactivated successfully. Apr 24 00:34:40.391230 systemd[1]: session-2.scope: Deactivated successfully. Apr 24 00:34:40.392061 systemd-logind[1530]: Session 2 logged out. Waiting for processes to exit. Apr 24 00:34:40.393638 systemd-logind[1530]: Removed session 2. Apr 24 00:34:40.494804 systemd[1]: Started sshd@2-172.236.108.90:22-20.229.252.112:52534.service - OpenSSH per-connection server daemon (20.229.252.112:52534). Apr 24 00:34:41.017234 sshd[1728]: Accepted publickey for core from 20.229.252.112 port 52534 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:34:41.018194 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:34:41.024881 systemd-logind[1530]: New session 3 of user core. Apr 24 00:34:41.031699 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 24 00:34:41.306909 sshd[1731]: Connection closed by 20.229.252.112 port 52534 Apr 24 00:34:41.308616 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Apr 24 00:34:41.313643 systemd[1]: sshd@2-172.236.108.90:22-20.229.252.112:52534.service: Deactivated successfully. Apr 24 00:34:41.316124 systemd[1]: session-3.scope: Deactivated successfully. Apr 24 00:34:41.317405 systemd-logind[1530]: Session 3 logged out. Waiting for processes to exit. Apr 24 00:34:41.319097 systemd-logind[1530]: Removed session 3. Apr 24 00:34:41.438885 systemd[1]: Started sshd@3-172.236.108.90:22-20.229.252.112:52548.service - OpenSSH per-connection server daemon (20.229.252.112:52548). 
Apr 24 00:34:41.986020 sshd[1737]: Accepted publickey for core from 20.229.252.112 port 52548 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:34:41.986789 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:34:41.994345 systemd-logind[1530]: New session 4 of user core. Apr 24 00:34:42.001506 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 24 00:34:42.297562 sshd[1740]: Connection closed by 20.229.252.112 port 52548 Apr 24 00:34:42.299489 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Apr 24 00:34:42.303094 systemd[1]: sshd@3-172.236.108.90:22-20.229.252.112:52548.service: Deactivated successfully. Apr 24 00:34:42.305596 systemd[1]: session-4.scope: Deactivated successfully. Apr 24 00:34:42.306841 systemd-logind[1530]: Session 4 logged out. Waiting for processes to exit. Apr 24 00:34:42.308242 systemd-logind[1530]: Removed session 4. Apr 24 00:34:42.407254 systemd[1]: Started sshd@4-172.236.108.90:22-20.229.252.112:52564.service - OpenSSH per-connection server daemon (20.229.252.112:52564). Apr 24 00:34:42.931890 sshd[1746]: Accepted publickey for core from 20.229.252.112 port 52564 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:34:42.933530 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:34:42.939647 systemd-logind[1530]: New session 5 of user core. Apr 24 00:34:42.942429 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 24 00:34:43.139185 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 24 00:34:43.139537 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:34:43.156939 sudo[1750]: pam_unix(sudo:session): session closed for user root Apr 24 00:34:43.253204 sshd[1749]: Connection closed by 20.229.252.112 port 52564 Apr 24 00:34:43.254569 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Apr 24 00:34:43.259332 systemd-logind[1530]: Session 5 logged out. Waiting for processes to exit. Apr 24 00:34:43.260031 systemd[1]: sshd@4-172.236.108.90:22-20.229.252.112:52564.service: Deactivated successfully. Apr 24 00:34:43.262423 systemd[1]: session-5.scope: Deactivated successfully. Apr 24 00:34:43.264128 systemd-logind[1530]: Removed session 5. Apr 24 00:34:43.362339 systemd[1]: Started sshd@5-172.236.108.90:22-20.229.252.112:52578.service - OpenSSH per-connection server daemon (20.229.252.112:52578). Apr 24 00:34:43.905748 sshd[1756]: Accepted publickey for core from 20.229.252.112 port 52578 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:34:43.907787 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:34:43.915701 systemd-logind[1530]: New session 6 of user core. Apr 24 00:34:43.923605 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 24 00:34:44.109516 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 24 00:34:44.109992 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:34:44.115957 sudo[1761]: pam_unix(sudo:session): session closed for user root Apr 24 00:34:44.123649 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 24 00:34:44.124139 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:34:44.138886 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 24 00:34:44.194470 augenrules[1783]: No rules Apr 24 00:34:44.195201 systemd[1]: audit-rules.service: Deactivated successfully. Apr 24 00:34:44.195652 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 24 00:34:44.197510 sudo[1760]: pam_unix(sudo:session): session closed for user root Apr 24 00:34:44.295568 sshd[1759]: Connection closed by 20.229.252.112 port 52578 Apr 24 00:34:44.296584 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Apr 24 00:34:44.303573 systemd-logind[1530]: Session 6 logged out. Waiting for processes to exit. Apr 24 00:34:44.304545 systemd[1]: sshd@5-172.236.108.90:22-20.229.252.112:52578.service: Deactivated successfully. Apr 24 00:34:44.307839 systemd[1]: session-6.scope: Deactivated successfully. Apr 24 00:34:44.310930 systemd-logind[1530]: Removed session 6. Apr 24 00:34:44.404962 systemd[1]: Started sshd@6-172.236.108.90:22-20.229.252.112:52588.service - OpenSSH per-connection server daemon (20.229.252.112:52588). 
Apr 24 00:34:44.942030 sshd[1792]: Accepted publickey for core from 20.229.252.112 port 52588 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:34:44.943805 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:34:44.949111 systemd-logind[1530]: New session 7 of user core. Apr 24 00:34:44.953436 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 24 00:34:45.143395 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 24 00:34:45.143808 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 24 00:34:45.510135 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 24 00:34:45.522867 (dockerd)[1813]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 24 00:34:45.814916 dockerd[1813]: time="2026-04-24T00:34:45.814754204Z" level=info msg="Starting up" Apr 24 00:34:45.818382 dockerd[1813]: time="2026-04-24T00:34:45.817690577Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 24 00:34:45.836104 dockerd[1813]: time="2026-04-24T00:34:45.836048875Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 24 00:34:45.863338 systemd[1]: var-lib-docker-metacopy\x2dcheck1867861176-merged.mount: Deactivated successfully. Apr 24 00:34:45.891650 dockerd[1813]: time="2026-04-24T00:34:45.891610111Z" level=info msg="Loading containers: start." Apr 24 00:34:45.904376 kernel: Initializing XFRM netlink socket Apr 24 00:34:46.227766 systemd-networkd[1444]: docker0: Link UP Apr 24 00:34:46.231224 dockerd[1813]: time="2026-04-24T00:34:46.231172210Z" level=info msg="Loading containers: done." 
Apr 24 00:34:46.250758 dockerd[1813]: time="2026-04-24T00:34:46.250690900Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 24 00:34:46.250928 dockerd[1813]: time="2026-04-24T00:34:46.250802800Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 24 00:34:46.250928 dockerd[1813]: time="2026-04-24T00:34:46.250898570Z" level=info msg="Initializing buildkit" Apr 24 00:34:46.283221 dockerd[1813]: time="2026-04-24T00:34:46.282958282Z" level=info msg="Completed buildkit initialization" Apr 24 00:34:46.291751 dockerd[1813]: time="2026-04-24T00:34:46.291700961Z" level=info msg="Daemon has completed initialization" Apr 24 00:34:46.292058 dockerd[1813]: time="2026-04-24T00:34:46.291970111Z" level=info msg="API listen on /run/docker.sock" Apr 24 00:34:46.292164 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 24 00:34:46.828937 containerd[1559]: time="2026-04-24T00:34:46.828892228Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 24 00:34:46.849955 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2565105748-merged.mount: Deactivated successfully. Apr 24 00:34:47.427368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2200906311.mount: Deactivated successfully. 
Apr 24 00:34:48.610772 containerd[1559]: time="2026-04-24T00:34:48.610701549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:48.611695 containerd[1559]: time="2026-04-24T00:34:48.611593850Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27579429" Apr 24 00:34:48.612178 containerd[1559]: time="2026-04-24T00:34:48.612145721Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:48.614463 containerd[1559]: time="2026-04-24T00:34:48.614425773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:48.616163 containerd[1559]: time="2026-04-24T00:34:48.615425754Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.786495446s" Apr 24 00:34:48.616163 containerd[1559]: time="2026-04-24T00:34:48.615461234Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 24 00:34:48.616331 containerd[1559]: time="2026-04-24T00:34:48.616273175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 24 00:34:48.644531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Apr 24 00:34:48.647499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:34:48.820757 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:34:48.830619 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 24 00:34:48.871346 kubelet[2090]: E0424 00:34:48.871192 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 24 00:34:48.876441 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 24 00:34:48.876651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 24 00:34:48.877216 systemd[1]: kubelet.service: Consumed 194ms CPU time, 110.6M memory peak. 
Apr 24 00:34:49.803219 containerd[1559]: time="2026-04-24T00:34:49.803147312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:49.804541 containerd[1559]: time="2026-04-24T00:34:49.804509163Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451665" Apr 24 00:34:49.804618 containerd[1559]: time="2026-04-24T00:34:49.804590413Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:49.806843 containerd[1559]: time="2026-04-24T00:34:49.806821875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:49.808255 containerd[1559]: time="2026-04-24T00:34:49.808217387Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 1.191879792s" Apr 24 00:34:49.808330 containerd[1559]: time="2026-04-24T00:34:49.808259007Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 24 00:34:49.809179 containerd[1559]: time="2026-04-24T00:34:49.809147668Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 24 00:34:50.780218 containerd[1559]: time="2026-04-24T00:34:50.780158718Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:50.780791 containerd[1559]: time="2026-04-24T00:34:50.780769369Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555296" Apr 24 00:34:50.781642 containerd[1559]: time="2026-04-24T00:34:50.781623120Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:50.784064 containerd[1559]: time="2026-04-24T00:34:50.784041482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:50.784981 containerd[1559]: time="2026-04-24T00:34:50.784937073Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 975.750805ms" Apr 24 00:34:50.785039 containerd[1559]: time="2026-04-24T00:34:50.784981543Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 24 00:34:50.785891 containerd[1559]: time="2026-04-24T00:34:50.785789904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 24 00:34:51.752303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737228232.mount: Deactivated successfully. 
Apr 24 00:34:52.018693 containerd[1559]: time="2026-04-24T00:34:52.018523526Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:52.019932 containerd[1559]: time="2026-04-24T00:34:52.019740808Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699931" Apr 24 00:34:52.020710 containerd[1559]: time="2026-04-24T00:34:52.020664739Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:52.027084 containerd[1559]: time="2026-04-24T00:34:52.027023835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:52.027493 containerd[1559]: time="2026-04-24T00:34:52.027450425Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 1.241527871s" Apr 24 00:34:52.027493 containerd[1559]: time="2026-04-24T00:34:52.027490025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 24 00:34:52.028908 containerd[1559]: time="2026-04-24T00:34:52.028858437Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 24 00:34:52.599566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2817757213.mount: Deactivated successfully. 
Apr 24 00:34:53.535618 containerd[1559]: time="2026-04-24T00:34:53.535528143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:53.537127 containerd[1559]: time="2026-04-24T00:34:53.537026285Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556548" Apr 24 00:34:53.537773 containerd[1559]: time="2026-04-24T00:34:53.537736085Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:53.542075 containerd[1559]: time="2026-04-24T00:34:53.541681629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:53.543253 containerd[1559]: time="2026-04-24T00:34:53.543042081Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.514140204s" Apr 24 00:34:53.543253 containerd[1559]: time="2026-04-24T00:34:53.543081271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 24 00:34:53.543643 containerd[1559]: time="2026-04-24T00:34:53.543611611Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 24 00:34:54.051802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175192323.mount: Deactivated successfully. 
Apr 24 00:34:54.060338 containerd[1559]: time="2026-04-24T00:34:54.060045837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:54.061023 containerd[1559]: time="2026-04-24T00:34:54.060968538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224" Apr 24 00:34:54.062330 containerd[1559]: time="2026-04-24T00:34:54.061734229Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:54.064079 containerd[1559]: time="2026-04-24T00:34:54.064035121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:54.065340 containerd[1559]: time="2026-04-24T00:34:54.064871682Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 521.223901ms" Apr 24 00:34:54.065340 containerd[1559]: time="2026-04-24T00:34:54.064907882Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 24 00:34:54.065663 containerd[1559]: time="2026-04-24T00:34:54.065634303Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 24 00:34:54.610744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3864558223.mount: Deactivated successfully. 
Apr 24 00:34:55.367758 containerd[1559]: time="2026-04-24T00:34:55.367685365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:55.369360 containerd[1559]: time="2026-04-24T00:34:55.369318636Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23644471" Apr 24 00:34:55.370937 containerd[1559]: time="2026-04-24T00:34:55.370479988Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:55.372683 containerd[1559]: time="2026-04-24T00:34:55.372639230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:34:55.374233 containerd[1559]: time="2026-04-24T00:34:55.373479341Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.307813408s" Apr 24 00:34:55.374233 containerd[1559]: time="2026-04-24T00:34:55.373530511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 24 00:34:56.603598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:34:56.604664 systemd[1]: kubelet.service: Consumed 194ms CPU time, 110.6M memory peak. Apr 24 00:34:56.607304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:34:56.651564 systemd[1]: Reload requested from client PID 2261 ('systemctl') (unit session-7.scope)... 
Apr 24 00:34:56.651581 systemd[1]: Reloading... Apr 24 00:34:56.834324 zram_generator::config[2311]: No configuration found. Apr 24 00:34:57.070078 systemd[1]: Reloading finished in 418 ms. Apr 24 00:34:57.122993 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 00:34:57.123107 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 00:34:57.123437 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:34:57.123484 systemd[1]: kubelet.service: Consumed 155ms CPU time, 98.3M memory peak. Apr 24 00:34:57.125038 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:34:57.406448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:34:57.414651 (kubelet)[2359]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 00:34:57.466309 kubelet[2359]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 00:34:57.678336 kubelet[2359]: I0424 00:34:57.678195 2359 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 24 00:34:57.678336 kubelet[2359]: I0424 00:34:57.678247 2359 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 00:34:57.678336 kubelet[2359]: I0424 00:34:57.678267 2359 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 00:34:57.678336 kubelet[2359]: I0424 00:34:57.678271 2359 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 00:34:57.678780 kubelet[2359]: I0424 00:34:57.678582 2359 server.go:951] "Client rotation is on, will bootstrap in background" Apr 24 00:34:57.687317 kubelet[2359]: E0424 00:34:57.686421 2359 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.236.108.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.108.90:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 00:34:57.687317 kubelet[2359]: I0424 00:34:57.686842 2359 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 00:34:57.692134 kubelet[2359]: I0424 00:34:57.692108 2359 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 00:34:57.695946 kubelet[2359]: I0424 00:34:57.695926 2359 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 00:34:57.697543 kubelet[2359]: I0424 00:34:57.697507 2359 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 00:34:57.697706 kubelet[2359]: I0424 00:34:57.697543 2359 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-108-90","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 00:34:57.697809 kubelet[2359]: I0424 00:34:57.697706 2359 topology_manager.go:143] "Creating topology manager with none policy" Apr 24 
00:34:57.697809 kubelet[2359]: I0424 00:34:57.697714 2359 container_manager_linux.go:308] "Creating device plugin manager" Apr 24 00:34:57.697809 kubelet[2359]: I0424 00:34:57.697808 2359 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 00:34:57.699575 kubelet[2359]: I0424 00:34:57.699556 2359 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 24 00:34:57.699739 kubelet[2359]: I0424 00:34:57.699725 2359 kubelet.go:482] "Attempting to sync node with API server" Apr 24 00:34:57.699771 kubelet[2359]: I0424 00:34:57.699741 2359 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 00:34:57.699771 kubelet[2359]: I0424 00:34:57.699764 2359 kubelet.go:394] "Adding apiserver pod source" Apr 24 00:34:57.699819 kubelet[2359]: I0424 00:34:57.699773 2359 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 00:34:57.704076 kubelet[2359]: I0424 00:34:57.704057 2359 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 24 00:34:57.706458 kubelet[2359]: I0424 00:34:57.706441 2359 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 00:34:57.707310 kubelet[2359]: I0424 00:34:57.706536 2359 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 00:34:57.707310 kubelet[2359]: W0424 00:34:57.706594 2359 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 24 00:34:57.709440 kubelet[2359]: I0424 00:34:57.709127 2359 server.go:1257] "Started kubelet" Apr 24 00:34:57.710757 kubelet[2359]: I0424 00:34:57.710726 2359 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 24 00:34:57.715201 kubelet[2359]: E0424 00:34:57.714000 2359 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.108.90:6443/api/v1/namespaces/default/events\": dial tcp 172.236.108.90:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-108-90.18a923ce7b0da452 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-108-90,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-108-90,},FirstTimestamp:2026-04-24 00:34:57.709098066 +0000 UTC m=+0.290002651,LastTimestamp:2026-04-24 00:34:57.709098066 +0000 UTC m=+0.290002651,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-108-90,}" Apr 24 00:34:57.716690 kubelet[2359]: I0424 00:34:57.716170 2359 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 00:34:57.717398 kubelet[2359]: I0424 00:34:57.717369 2359 server.go:317] "Adding debug handlers to kubelet server" Apr 24 00:34:57.720485 kubelet[2359]: I0424 00:34:57.720452 2359 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 24 00:34:57.720816 kubelet[2359]: I0424 00:34:57.720770 2359 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 00:34:57.720890 kubelet[2359]: I0424 00:34:57.720831 2359 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 00:34:57.720943 kubelet[2359]: E0424 00:34:57.720804 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:34:57.721030 
kubelet[2359]: I0424 00:34:57.721014 2359 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 00:34:57.721230 kubelet[2359]: I0424 00:34:57.721212 2359 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 00:34:57.721646 kubelet[2359]: I0424 00:34:57.721633 2359 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 00:34:57.721774 kubelet[2359]: I0424 00:34:57.721762 2359 reconciler.go:29] "Reconciler: start to sync state" Apr 24 00:34:57.723233 kubelet[2359]: E0424 00:34:57.723116 2359 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-90?timeout=10s\": dial tcp 172.236.108.90:6443: connect: connection refused" interval="200ms" Apr 24 00:34:57.724152 kubelet[2359]: I0424 00:34:57.723806 2359 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 00:34:57.724739 kubelet[2359]: E0424 00:34:57.724722 2359 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 00:34:57.725710 kubelet[2359]: I0424 00:34:57.725694 2359 factory.go:223] Registration of the containerd container factory successfully Apr 24 00:34:57.725985 kubelet[2359]: I0424 00:34:57.725960 2359 factory.go:223] Registration of the systemd container factory successfully Apr 24 00:34:57.751821 kubelet[2359]: I0424 00:34:57.751799 2359 cpu_manager.go:225] "Starting" policy="none" Apr 24 00:34:57.752249 kubelet[2359]: I0424 00:34:57.752234 2359 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 24 00:34:57.752368 kubelet[2359]: I0424 00:34:57.752356 2359 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 24 00:34:57.753753 kubelet[2359]: I0424 00:34:57.753274 2359 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 24 00:34:57.755125 kubelet[2359]: I0424 00:34:57.755090 2359 policy_none.go:50] "Start" Apr 24 00:34:57.755296 kubelet[2359]: I0424 00:34:57.755207 2359 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 00:34:57.755296 kubelet[2359]: I0424 00:34:57.755223 2359 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 00:34:57.755398 kubelet[2359]: I0424 00:34:57.755375 2359 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 24 00:34:57.755398 kubelet[2359]: I0424 00:34:57.755398 2359 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 24 00:34:57.755444 kubelet[2359]: I0424 00:34:57.755416 2359 kubelet.go:2501] "Starting kubelet main sync loop" Apr 24 00:34:57.755568 kubelet[2359]: E0424 00:34:57.755462 2359 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 00:34:57.757148 kubelet[2359]: I0424 00:34:57.757137 2359 policy_none.go:44] "Start" Apr 24 00:34:57.765341 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 24 00:34:57.785160 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 24 00:34:57.788828 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 24 00:34:57.801210 kubelet[2359]: E0424 00:34:57.800614 2359 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 00:34:57.801210 kubelet[2359]: I0424 00:34:57.800862 2359 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 24 00:34:57.801210 kubelet[2359]: I0424 00:34:57.800874 2359 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 00:34:57.801977 kubelet[2359]: I0424 00:34:57.801594 2359 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 24 00:34:57.804993 kubelet[2359]: E0424 00:34:57.804970 2359 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 24 00:34:57.805040 kubelet[2359]: E0424 00:34:57.805010 2359 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-108-90\" not found" Apr 24 00:34:57.867546 systemd[1]: Created slice kubepods-burstable-pod12f124768468448e1d615f8e9fba6c2e.slice - libcontainer container kubepods-burstable-pod12f124768468448e1d615f8e9fba6c2e.slice. Apr 24 00:34:57.883827 kubelet[2359]: E0424 00:34:57.883794 2359 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-90\" not found" node="172-236-108-90" Apr 24 00:34:57.887604 systemd[1]: Created slice kubepods-burstable-pod535dfd9d383a7e03397eedd720198c11.slice - libcontainer container kubepods-burstable-pod535dfd9d383a7e03397eedd720198c11.slice. Apr 24 00:34:57.898772 kubelet[2359]: E0424 00:34:57.898746 2359 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-90\" not found" node="172-236-108-90" Apr 24 00:34:57.900775 systemd[1]: Created slice kubepods-burstable-poda6f3e1e2ac9b4396d656fb945d7fc20d.slice - libcontainer container kubepods-burstable-poda6f3e1e2ac9b4396d656fb945d7fc20d.slice. 
Apr 24 00:34:57.902704 kubelet[2359]: E0424 00:34:57.902678 2359 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-90\" not found" node="172-236-108-90" Apr 24 00:34:57.903219 kubelet[2359]: I0424 00:34:57.903204 2359 kubelet_node_status.go:74] "Attempting to register node" node="172-236-108-90" Apr 24 00:34:57.903602 kubelet[2359]: E0424 00:34:57.903578 2359 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.236.108.90:6443/api/v1/nodes\": dial tcp 172.236.108.90:6443: connect: connection refused" node="172-236-108-90" Apr 24 00:34:57.924115 kubelet[2359]: I0424 00:34:57.923853 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/535dfd9d383a7e03397eedd720198c11-flexvolume-dir\") pod \"kube-controller-manager-172-236-108-90\" (UID: \"535dfd9d383a7e03397eedd720198c11\") " pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:34:57.924115 kubelet[2359]: I0424 00:34:57.923895 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/535dfd9d383a7e03397eedd720198c11-k8s-certs\") pod \"kube-controller-manager-172-236-108-90\" (UID: \"535dfd9d383a7e03397eedd720198c11\") " pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:34:57.924115 kubelet[2359]: I0424 00:34:57.923915 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/535dfd9d383a7e03397eedd720198c11-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-108-90\" (UID: \"535dfd9d383a7e03397eedd720198c11\") " pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:34:57.924115 kubelet[2359]: I0424 00:34:57.923933 2359 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12f124768468448e1d615f8e9fba6c2e-ca-certs\") pod \"kube-apiserver-172-236-108-90\" (UID: \"12f124768468448e1d615f8e9fba6c2e\") " pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:34:57.924115 kubelet[2359]: I0424 00:34:57.923951 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12f124768468448e1d615f8e9fba6c2e-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-108-90\" (UID: \"12f124768468448e1d615f8e9fba6c2e\") " pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:34:57.924341 kubelet[2359]: I0424 00:34:57.923965 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/535dfd9d383a7e03397eedd720198c11-ca-certs\") pod \"kube-controller-manager-172-236-108-90\" (UID: \"535dfd9d383a7e03397eedd720198c11\") " pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:34:57.924341 kubelet[2359]: I0424 00:34:57.923979 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/535dfd9d383a7e03397eedd720198c11-kubeconfig\") pod \"kube-controller-manager-172-236-108-90\" (UID: \"535dfd9d383a7e03397eedd720198c11\") " pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:34:57.924341 kubelet[2359]: I0424 00:34:57.923993 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6f3e1e2ac9b4396d656fb945d7fc20d-kubeconfig\") pod \"kube-scheduler-172-236-108-90\" (UID: \"a6f3e1e2ac9b4396d656fb945d7fc20d\") " pod="kube-system/kube-scheduler-172-236-108-90" Apr 24 00:34:57.924341 kubelet[2359]: 
I0424 00:34:57.924007 2359 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12f124768468448e1d615f8e9fba6c2e-k8s-certs\") pod \"kube-apiserver-172-236-108-90\" (UID: \"12f124768468448e1d615f8e9fba6c2e\") " pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:34:57.924341 kubelet[2359]: E0424 00:34:57.924182 2359 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-90?timeout=10s\": dial tcp 172.236.108.90:6443: connect: connection refused" interval="400ms" Apr 24 00:34:58.107116 kubelet[2359]: I0424 00:34:58.106988 2359 kubelet_node_status.go:74] "Attempting to register node" node="172-236-108-90" Apr 24 00:34:58.109669 kubelet[2359]: E0424 00:34:58.109627 2359 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.236.108.90:6443/api/v1/nodes\": dial tcp 172.236.108.90:6443: connect: connection refused" node="172-236-108-90" Apr 24 00:34:58.188089 kubelet[2359]: E0424 00:34:58.187190 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:34:58.189683 containerd[1559]: time="2026-04-24T00:34:58.189599406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-108-90,Uid:12f124768468448e1d615f8e9fba6c2e,Namespace:kube-system,Attempt:0,}" Apr 24 00:34:58.201840 kubelet[2359]: E0424 00:34:58.201803 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:34:58.202597 containerd[1559]: time="2026-04-24T00:34:58.202540749Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-236-108-90,Uid:535dfd9d383a7e03397eedd720198c11,Namespace:kube-system,Attempt:0,}" Apr 24 00:34:58.204575 kubelet[2359]: E0424 00:34:58.204542 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:34:58.205107 containerd[1559]: time="2026-04-24T00:34:58.205056622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-108-90,Uid:a6f3e1e2ac9b4396d656fb945d7fc20d,Namespace:kube-system,Attempt:0,}" Apr 24 00:34:58.325812 kubelet[2359]: E0424 00:34:58.325750 2359 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-90?timeout=10s\": dial tcp 172.236.108.90:6443: connect: connection refused" interval="800ms" Apr 24 00:34:58.515412 kubelet[2359]: I0424 00:34:58.515273 2359 kubelet_node_status.go:74] "Attempting to register node" node="172-236-108-90" Apr 24 00:34:58.515930 kubelet[2359]: E0424 00:34:58.515684 2359 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.236.108.90:6443/api/v1/nodes\": dial tcp 172.236.108.90:6443: connect: connection refused" node="172-236-108-90" Apr 24 00:34:58.842745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2421167607.mount: Deactivated successfully. 
Apr 24 00:34:58.884376 containerd[1559]: time="2026-04-24T00:34:58.883158040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:34:58.899875 containerd[1559]: time="2026-04-24T00:34:58.899499576Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Apr 24 00:34:58.908379 containerd[1559]: time="2026-04-24T00:34:58.908088084Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:34:58.911730 containerd[1559]: time="2026-04-24T00:34:58.911690518Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:34:58.914309 containerd[1559]: time="2026-04-24T00:34:58.913985770Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:34:58.915535 containerd[1559]: time="2026-04-24T00:34:58.915485542Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 24 00:34:58.919716 containerd[1559]: time="2026-04-24T00:34:58.918299135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 24 00:34:58.926299 containerd[1559]: time="2026-04-24T00:34:58.924084510Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 
00:34:58.926863 containerd[1559]: time="2026-04-24T00:34:58.926519953Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 719.53458ms" Apr 24 00:34:58.930508 containerd[1559]: time="2026-04-24T00:34:58.927570534Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 722.689753ms" Apr 24 00:34:58.930508 containerd[1559]: time="2026-04-24T00:34:58.929647816Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 737.699498ms" Apr 24 00:34:59.015345 containerd[1559]: time="2026-04-24T00:34:59.015122611Z" level=info msg="connecting to shim 425c9984f56123b80a47ed193227c203b327f4fd4d38413c668091f22b61f7f9" address="unix:///run/containerd/s/e73698fdd0577c5aa8f8fde820add6b7b24c404d8201bab411b6a209b55e1f17" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:34:59.054984 containerd[1559]: time="2026-04-24T00:34:59.054924711Z" level=info msg="connecting to shim a5fc9d21f2eb9981f460948a4959a3dc514fc15b2cca24afb3c2edbb029aac3d" address="unix:///run/containerd/s/5fc892d62b084f585862b836d1905b5cc6f0246c316a31bff248a2c1ae551c9f" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:34:59.064565 containerd[1559]: time="2026-04-24T00:34:59.064500131Z" level=info msg="connecting to shim 
2495eb54bc6c7ac5d2805c93a51413cfe5f518308b98a833058a0dbc8cd65622" address="unix:///run/containerd/s/fcd93a0c98538272581c35a49664563a40cbf6a94ef7f0a94374bd0f604b1d00" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:34:59.079910 systemd[1]: Started cri-containerd-425c9984f56123b80a47ed193227c203b327f4fd4d38413c668091f22b61f7f9.scope - libcontainer container 425c9984f56123b80a47ed193227c203b327f4fd4d38413c668091f22b61f7f9. Apr 24 00:34:59.125971 systemd[1]: Started cri-containerd-a5fc9d21f2eb9981f460948a4959a3dc514fc15b2cca24afb3c2edbb029aac3d.scope - libcontainer container a5fc9d21f2eb9981f460948a4959a3dc514fc15b2cca24afb3c2edbb029aac3d. Apr 24 00:34:59.129384 kubelet[2359]: E0424 00:34:59.129322 2359 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.108.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-108-90?timeout=10s\": dial tcp 172.236.108.90:6443: connect: connection refused" interval="1.6s" Apr 24 00:34:59.138801 systemd[1]: Started cri-containerd-2495eb54bc6c7ac5d2805c93a51413cfe5f518308b98a833058a0dbc8cd65622.scope - libcontainer container 2495eb54bc6c7ac5d2805c93a51413cfe5f518308b98a833058a0dbc8cd65622. 
Apr 24 00:34:59.226703 containerd[1559]: time="2026-04-24T00:34:59.226627783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-108-90,Uid:535dfd9d383a7e03397eedd720198c11,Namespace:kube-system,Attempt:0,} returns sandbox id \"425c9984f56123b80a47ed193227c203b327f4fd4d38413c668091f22b61f7f9\"" Apr 24 00:34:59.229909 kubelet[2359]: E0424 00:34:59.229868 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:34:59.246651 containerd[1559]: time="2026-04-24T00:34:59.246603013Z" level=info msg="CreateContainer within sandbox \"425c9984f56123b80a47ed193227c203b327f4fd4d38413c668091f22b61f7f9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 00:34:59.253186 containerd[1559]: time="2026-04-24T00:34:59.253114449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-108-90,Uid:12f124768468448e1d615f8e9fba6c2e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5fc9d21f2eb9981f460948a4959a3dc514fc15b2cca24afb3c2edbb029aac3d\"" Apr 24 00:34:59.255317 kubelet[2359]: E0424 00:34:59.255013 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:34:59.265187 containerd[1559]: time="2026-04-24T00:34:59.265072241Z" level=info msg="CreateContainer within sandbox \"a5fc9d21f2eb9981f460948a4959a3dc514fc15b2cca24afb3c2edbb029aac3d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 00:34:59.272517 containerd[1559]: time="2026-04-24T00:34:59.272471299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-108-90,Uid:a6f3e1e2ac9b4396d656fb945d7fc20d,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"2495eb54bc6c7ac5d2805c93a51413cfe5f518308b98a833058a0dbc8cd65622\"" Apr 24 00:34:59.274752 kubelet[2359]: E0424 00:34:59.274707 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:34:59.281461 containerd[1559]: time="2026-04-24T00:34:59.281413488Z" level=info msg="CreateContainer within sandbox \"2495eb54bc6c7ac5d2805c93a51413cfe5f518308b98a833058a0dbc8cd65622\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 00:34:59.282403 containerd[1559]: time="2026-04-24T00:34:59.281499208Z" level=info msg="Container dfdf37f222097941f48b3edb9aed4d2301b261ee56448e3b5f0a31ef7fbe844d: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:34:59.318658 kubelet[2359]: I0424 00:34:59.318607 2359 kubelet_node_status.go:74] "Attempting to register node" node="172-236-108-90" Apr 24 00:34:59.319064 kubelet[2359]: E0424 00:34:59.319029 2359 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://172.236.108.90:6443/api/v1/nodes\": dial tcp 172.236.108.90:6443: connect: connection refused" node="172-236-108-90" Apr 24 00:34:59.334838 containerd[1559]: time="2026-04-24T00:34:59.334780321Z" level=info msg="CreateContainer within sandbox \"425c9984f56123b80a47ed193227c203b327f4fd4d38413c668091f22b61f7f9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dfdf37f222097941f48b3edb9aed4d2301b261ee56448e3b5f0a31ef7fbe844d\"" Apr 24 00:34:59.343521 containerd[1559]: time="2026-04-24T00:34:59.341903138Z" level=info msg="StartContainer for \"dfdf37f222097941f48b3edb9aed4d2301b261ee56448e3b5f0a31ef7fbe844d\"" Apr 24 00:34:59.345668 containerd[1559]: time="2026-04-24T00:34:59.345631442Z" level=info msg="connecting to shim dfdf37f222097941f48b3edb9aed4d2301b261ee56448e3b5f0a31ef7fbe844d" 
address="unix:///run/containerd/s/e73698fdd0577c5aa8f8fde820add6b7b24c404d8201bab411b6a209b55e1f17" protocol=ttrpc version=3 Apr 24 00:34:59.367835 containerd[1559]: time="2026-04-24T00:34:59.367780434Z" level=info msg="Container 64a0291eb3b59061a306170c57e1ade2549f9c8eabca2be8a86c91f8bffb0c0d: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:34:59.392833 containerd[1559]: time="2026-04-24T00:34:59.391543848Z" level=info msg="Container b725e27385e0380bfd881eec7d7a83d375fbb1bc290e9e42320afac9907f0ac1: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:34:59.395003 systemd[1]: Started cri-containerd-dfdf37f222097941f48b3edb9aed4d2301b261ee56448e3b5f0a31ef7fbe844d.scope - libcontainer container dfdf37f222097941f48b3edb9aed4d2301b261ee56448e3b5f0a31ef7fbe844d. Apr 24 00:34:59.424078 containerd[1559]: time="2026-04-24T00:34:59.423945880Z" level=info msg="CreateContainer within sandbox \"a5fc9d21f2eb9981f460948a4959a3dc514fc15b2cca24afb3c2edbb029aac3d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"64a0291eb3b59061a306170c57e1ade2549f9c8eabca2be8a86c91f8bffb0c0d\"" Apr 24 00:34:59.425494 containerd[1559]: time="2026-04-24T00:34:59.425342012Z" level=info msg="StartContainer for \"64a0291eb3b59061a306170c57e1ade2549f9c8eabca2be8a86c91f8bffb0c0d\"" Apr 24 00:34:59.429616 containerd[1559]: time="2026-04-24T00:34:59.429557316Z" level=info msg="connecting to shim 64a0291eb3b59061a306170c57e1ade2549f9c8eabca2be8a86c91f8bffb0c0d" address="unix:///run/containerd/s/5fc892d62b084f585862b836d1905b5cc6f0246c316a31bff248a2c1ae551c9f" protocol=ttrpc version=3 Apr 24 00:34:59.436891 containerd[1559]: time="2026-04-24T00:34:59.436731233Z" level=info msg="CreateContainer within sandbox \"2495eb54bc6c7ac5d2805c93a51413cfe5f518308b98a833058a0dbc8cd65622\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b725e27385e0380bfd881eec7d7a83d375fbb1bc290e9e42320afac9907f0ac1\"" Apr 24 00:34:59.439550 containerd[1559]: 
time="2026-04-24T00:34:59.439516066Z" level=info msg="StartContainer for \"b725e27385e0380bfd881eec7d7a83d375fbb1bc290e9e42320afac9907f0ac1\"" Apr 24 00:34:59.441619 containerd[1559]: time="2026-04-24T00:34:59.441585528Z" level=info msg="connecting to shim b725e27385e0380bfd881eec7d7a83d375fbb1bc290e9e42320afac9907f0ac1" address="unix:///run/containerd/s/fcd93a0c98538272581c35a49664563a40cbf6a94ef7f0a94374bd0f604b1d00" protocol=ttrpc version=3 Apr 24 00:34:59.475698 systemd[1]: Started cri-containerd-64a0291eb3b59061a306170c57e1ade2549f9c8eabca2be8a86c91f8bffb0c0d.scope - libcontainer container 64a0291eb3b59061a306170c57e1ade2549f9c8eabca2be8a86c91f8bffb0c0d. Apr 24 00:34:59.495903 systemd[1]: Started cri-containerd-b725e27385e0380bfd881eec7d7a83d375fbb1bc290e9e42320afac9907f0ac1.scope - libcontainer container b725e27385e0380bfd881eec7d7a83d375fbb1bc290e9e42320afac9907f0ac1. Apr 24 00:34:59.555449 containerd[1559]: time="2026-04-24T00:34:59.555132391Z" level=info msg="StartContainer for \"dfdf37f222097941f48b3edb9aed4d2301b261ee56448e3b5f0a31ef7fbe844d\" returns successfully" Apr 24 00:34:59.617050 containerd[1559]: time="2026-04-24T00:34:59.616906033Z" level=info msg="StartContainer for \"64a0291eb3b59061a306170c57e1ade2549f9c8eabca2be8a86c91f8bffb0c0d\" returns successfully" Apr 24 00:34:59.668014 containerd[1559]: time="2026-04-24T00:34:59.667844354Z" level=info msg="StartContainer for \"b725e27385e0380bfd881eec7d7a83d375fbb1bc290e9e42320afac9907f0ac1\" returns successfully" Apr 24 00:34:59.774953 kubelet[2359]: E0424 00:34:59.774877 2359 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-90\" not found" node="172-236-108-90" Apr 24 00:34:59.775611 kubelet[2359]: E0424 00:34:59.775078 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 
00:34:59.776808 kubelet[2359]: E0424 00:34:59.776765 2359 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-90\" not found" node="172-236-108-90" Apr 24 00:34:59.777541 kubelet[2359]: E0424 00:34:59.776910 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:34:59.780816 kubelet[2359]: E0424 00:34:59.780759 2359 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-90\" not found" node="172-236-108-90" Apr 24 00:34:59.780947 kubelet[2359]: E0424 00:34:59.780923 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:00.787048 kubelet[2359]: E0424 00:35:00.785534 2359 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-90\" not found" node="172-236-108-90" Apr 24 00:35:00.787048 kubelet[2359]: E0424 00:35:00.786476 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:00.787048 kubelet[2359]: E0424 00:35:00.786840 2359 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-108-90\" not found" node="172-236-108-90" Apr 24 00:35:00.787048 kubelet[2359]: E0424 00:35:00.786965 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:00.923073 kubelet[2359]: I0424 00:35:00.922949 2359 kubelet_node_status.go:74] 
"Attempting to register node" node="172-236-108-90" Apr 24 00:35:00.993350 kubelet[2359]: E0424 00:35:00.993311 2359 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-108-90\" not found" node="172-236-108-90" Apr 24 00:35:01.069229 kubelet[2359]: I0424 00:35:01.069102 2359 kubelet_node_status.go:77] "Successfully registered node" node="172-236-108-90" Apr 24 00:35:01.069229 kubelet[2359]: E0424 00:35:01.069150 2359 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"172-236-108-90\": node \"172-236-108-90\" not found" Apr 24 00:35:01.088431 kubelet[2359]: E0424 00:35:01.088392 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:01.188969 kubelet[2359]: E0424 00:35:01.188917 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:01.289374 kubelet[2359]: E0424 00:35:01.289335 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:01.390125 kubelet[2359]: E0424 00:35:01.389965 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:01.490474 kubelet[2359]: E0424 00:35:01.490380 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:01.591123 kubelet[2359]: E0424 00:35:01.591056 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:01.691838 kubelet[2359]: E0424 00:35:01.691693 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:01.792059 kubelet[2359]: E0424 00:35:01.792018 2359 kubelet_node_status.go:392] "Error getting the current node from 
lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:01.892194 kubelet[2359]: E0424 00:35:01.892128 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:01.992917 kubelet[2359]: E0424 00:35:01.992764 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:02.093826 kubelet[2359]: E0424 00:35:02.093661 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:02.194558 kubelet[2359]: E0424 00:35:02.194505 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:02.294883 kubelet[2359]: E0424 00:35:02.294761 2359 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"172-236-108-90\" not found" Apr 24 00:35:02.322328 kubelet[2359]: I0424 00:35:02.322273 2359 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:35:02.338527 kubelet[2359]: I0424 00:35:02.338489 2359 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:35:02.345038 kubelet[2359]: I0424 00:35:02.344998 2359 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-108-90" Apr 24 00:35:02.707192 kubelet[2359]: I0424 00:35:02.707062 2359 apiserver.go:52] "Watching apiserver" Apr 24 00:35:02.714229 kubelet[2359]: E0424 00:35:02.713812 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:02.714229 kubelet[2359]: E0424 00:35:02.713818 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:02.714420 kubelet[2359]: E0424 00:35:02.714241 2359 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:02.722606 kubelet[2359]: I0424 00:35:02.722576 2359 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 00:35:03.384158 systemd[1]: Reload requested from client PID 2643 ('systemctl') (unit session-7.scope)... Apr 24 00:35:03.384176 systemd[1]: Reloading... Apr 24 00:35:03.569362 zram_generator::config[2696]: No configuration found. Apr 24 00:35:03.855868 systemd[1]: Reloading finished in 471 ms. Apr 24 00:35:03.886675 kubelet[2359]: I0424 00:35:03.886478 2359 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 00:35:03.887040 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:35:03.898494 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 00:35:03.898814 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:35:03.898881 systemd[1]: kubelet.service: Consumed 790ms CPU time, 125.4M memory peak. Apr 24 00:35:03.903891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:35:04.114733 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:35:04.125115 (kubelet)[2738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 00:35:04.178851 kubelet[2738]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 00:35:04.189535 kubelet[2738]: I0424 00:35:04.189488 2738 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 24 00:35:04.189535 kubelet[2738]: I0424 00:35:04.189524 2738 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 00:35:04.189535 kubelet[2738]: I0424 00:35:04.189544 2738 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 24 00:35:04.189535 kubelet[2738]: I0424 00:35:04.189549 2738 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 00:35:04.189756 kubelet[2738]: I0424 00:35:04.189741 2738 server.go:951] "Client rotation is on, will bootstrap in background" Apr 24 00:35:04.190758 kubelet[2738]: I0424 00:35:04.190737 2738 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 00:35:04.192948 kubelet[2738]: I0424 00:35:04.192920 2738 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 00:35:04.197348 kubelet[2738]: I0424 00:35:04.196791 2738 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 00:35:04.201500 kubelet[2738]: I0424 00:35:04.201470 2738 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 24 00:35:04.201776 kubelet[2738]: I0424 00:35:04.201753 2738 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 00:35:04.201913 kubelet[2738]: I0424 00:35:04.201775 2738 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-108-90","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 00:35:04.201993 kubelet[2738]: I0424 00:35:04.201927 2738 topology_manager.go:143] "Creating topology manager with none policy" Apr 24 
00:35:04.201993 kubelet[2738]: I0424 00:35:04.201936 2738 container_manager_linux.go:308] "Creating device plugin manager" Apr 24 00:35:04.201993 kubelet[2738]: I0424 00:35:04.201960 2738 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 24 00:35:04.202142 kubelet[2738]: I0424 00:35:04.202130 2738 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 24 00:35:04.204401 kubelet[2738]: I0424 00:35:04.202456 2738 kubelet.go:482] "Attempting to sync node with API server" Apr 24 00:35:04.204401 kubelet[2738]: I0424 00:35:04.202478 2738 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 00:35:04.204401 kubelet[2738]: I0424 00:35:04.202495 2738 kubelet.go:394] "Adding apiserver pod source" Apr 24 00:35:04.204401 kubelet[2738]: I0424 00:35:04.202504 2738 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 00:35:04.208166 kubelet[2738]: I0424 00:35:04.208144 2738 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 24 00:35:04.210107 kubelet[2738]: I0424 00:35:04.210086 2738 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 00:35:04.210261 kubelet[2738]: I0424 00:35:04.210235 2738 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 24 00:35:04.216751 kubelet[2738]: I0424 00:35:04.216708 2738 server.go:1257] "Started kubelet" Apr 24 00:35:04.225725 kubelet[2738]: I0424 00:35:04.225189 2738 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 00:35:04.225725 kubelet[2738]: I0424 00:35:04.225264 2738 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 24 00:35:04.225725 kubelet[2738]: I0424 00:35:04.225514 
2738 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 00:35:04.225725 kubelet[2738]: I0424 00:35:04.225557 2738 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 00:35:04.226525 kubelet[2738]: I0424 00:35:04.226345 2738 server.go:317] "Adding debug handlers to kubelet server" Apr 24 00:35:04.228837 kubelet[2738]: I0424 00:35:04.228106 2738 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 24 00:35:04.233192 kubelet[2738]: I0424 00:35:04.233162 2738 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 00:35:04.233820 kubelet[2738]: I0424 00:35:04.233787 2738 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 24 00:35:04.237919 kubelet[2738]: I0424 00:35:04.237880 2738 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 24 00:35:04.239239 kubelet[2738]: I0424 00:35:04.239215 2738 reconciler.go:29] "Reconciler: start to sync state" Apr 24 00:35:04.240915 kubelet[2738]: E0424 00:35:04.240892 2738 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 00:35:04.243017 kubelet[2738]: I0424 00:35:04.242997 2738 factory.go:223] Registration of the systemd container factory successfully Apr 24 00:35:04.243103 kubelet[2738]: I0424 00:35:04.243083 2738 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 00:35:04.246978 kubelet[2738]: I0424 00:35:04.246960 2738 factory.go:223] Registration of the containerd container factory successfully Apr 24 00:35:04.266760 kubelet[2738]: I0424 00:35:04.266707 2738 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 24 00:35:04.268961 kubelet[2738]: I0424 00:35:04.268939 2738 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 24 00:35:04.269079 kubelet[2738]: I0424 00:35:04.268971 2738 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 24 00:35:04.269079 kubelet[2738]: I0424 00:35:04.268993 2738 kubelet.go:2501] "Starting kubelet main sync loop" Apr 24 00:35:04.269079 kubelet[2738]: E0424 00:35:04.269059 2738 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 00:35:04.316158 kubelet[2738]: I0424 00:35:04.316124 2738 cpu_manager.go:225] "Starting" policy="none" Apr 24 00:35:04.316158 kubelet[2738]: I0424 00:35:04.316148 2738 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 24 00:35:04.316556 kubelet[2738]: I0424 00:35:04.316172 2738 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 24 00:35:04.316605 kubelet[2738]: I0424 00:35:04.316555 2738 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 24 00:35:04.316641 kubelet[2738]: I0424 00:35:04.316580 2738 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 24 00:35:04.316641 kubelet[2738]: I0424 00:35:04.316625 2738 policy_none.go:50] "Start" Apr 24 00:35:04.316641 kubelet[2738]: I0424 00:35:04.316636 2738 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 24 00:35:04.316731 kubelet[2738]: I0424 00:35:04.316651 2738 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 24 00:35:04.318079 kubelet[2738]: I0424 00:35:04.316787 2738 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 24 00:35:04.318079 kubelet[2738]: I0424 00:35:04.316804 2738 
policy_none.go:44] "Start" Apr 24 00:35:04.322629 kubelet[2738]: E0424 00:35:04.322597 2738 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 00:35:04.324009 kubelet[2738]: I0424 00:35:04.322767 2738 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 24 00:35:04.324009 kubelet[2738]: I0424 00:35:04.322780 2738 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 00:35:04.324202 kubelet[2738]: I0424 00:35:04.324177 2738 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 24 00:35:04.326317 kubelet[2738]: E0424 00:35:04.325472 2738 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 00:35:04.371274 kubelet[2738]: I0424 00:35:04.370004 2738 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:35:04.371274 kubelet[2738]: I0424 00:35:04.370096 2738 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-108-90" Apr 24 00:35:04.371274 kubelet[2738]: I0424 00:35:04.370029 2738 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:35:04.377844 kubelet[2738]: E0424 00:35:04.377819 2738 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-108-90\" already exists" pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:35:04.379561 kubelet[2738]: E0424 00:35:04.379533 2738 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-108-90\" already exists" pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:35:04.379996 kubelet[2738]: E0424 00:35:04.379976 2738 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-108-90\" 
already exists" pod="kube-system/kube-scheduler-172-236-108-90" Apr 24 00:35:04.432143 kubelet[2738]: I0424 00:35:04.432096 2738 kubelet_node_status.go:74] "Attempting to register node" node="172-236-108-90" Apr 24 00:35:04.439460 kubelet[2738]: I0424 00:35:04.439415 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6f3e1e2ac9b4396d656fb945d7fc20d-kubeconfig\") pod \"kube-scheduler-172-236-108-90\" (UID: \"a6f3e1e2ac9b4396d656fb945d7fc20d\") " pod="kube-system/kube-scheduler-172-236-108-90" Apr 24 00:35:04.439460 kubelet[2738]: I0424 00:35:04.439449 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12f124768468448e1d615f8e9fba6c2e-k8s-certs\") pod \"kube-apiserver-172-236-108-90\" (UID: \"12f124768468448e1d615f8e9fba6c2e\") " pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:35:04.440378 kubelet[2738]: I0424 00:35:04.440350 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12f124768468448e1d615f8e9fba6c2e-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-108-90\" (UID: \"12f124768468448e1d615f8e9fba6c2e\") " pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:35:04.440462 kubelet[2738]: I0424 00:35:04.440431 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/535dfd9d383a7e03397eedd720198c11-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-108-90\" (UID: \"535dfd9d383a7e03397eedd720198c11\") " pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:35:04.440462 kubelet[2738]: I0424 00:35:04.440458 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12f124768468448e1d615f8e9fba6c2e-ca-certs\") pod \"kube-apiserver-172-236-108-90\" (UID: \"12f124768468448e1d615f8e9fba6c2e\") " pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:35:04.440558 kubelet[2738]: I0424 00:35:04.440473 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/535dfd9d383a7e03397eedd720198c11-ca-certs\") pod \"kube-controller-manager-172-236-108-90\" (UID: \"535dfd9d383a7e03397eedd720198c11\") " pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:35:04.440558 kubelet[2738]: I0424 00:35:04.440489 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/535dfd9d383a7e03397eedd720198c11-flexvolume-dir\") pod \"kube-controller-manager-172-236-108-90\" (UID: \"535dfd9d383a7e03397eedd720198c11\") " pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:35:04.440558 kubelet[2738]: I0424 00:35:04.440501 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/535dfd9d383a7e03397eedd720198c11-k8s-certs\") pod \"kube-controller-manager-172-236-108-90\" (UID: \"535dfd9d383a7e03397eedd720198c11\") " pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:35:04.440558 kubelet[2738]: I0424 00:35:04.440513 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/535dfd9d383a7e03397eedd720198c11-kubeconfig\") pod \"kube-controller-manager-172-236-108-90\" (UID: \"535dfd9d383a7e03397eedd720198c11\") " pod="kube-system/kube-controller-manager-172-236-108-90" Apr 24 00:35:04.441997 kubelet[2738]: I0424 00:35:04.441973 2738 kubelet_node_status.go:123] "Node was 
previously registered" node="172-236-108-90" Apr 24 00:35:04.442067 kubelet[2738]: I0424 00:35:04.442060 2738 kubelet_node_status.go:77] "Successfully registered node" node="172-236-108-90" Apr 24 00:35:04.681791 kubelet[2738]: E0424 00:35:04.680329 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:04.681791 kubelet[2738]: E0424 00:35:04.680356 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:04.681791 kubelet[2738]: E0424 00:35:04.680496 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:05.211379 kubelet[2738]: I0424 00:35:05.211324 2738 apiserver.go:52] "Watching apiserver" Apr 24 00:35:05.301167 kubelet[2738]: I0424 00:35:05.300039 2738 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:35:05.301167 kubelet[2738]: E0424 00:35:05.300202 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:05.301167 kubelet[2738]: E0424 00:35:05.301068 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:05.307609 kubelet[2738]: E0424 00:35:05.307573 2738 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-108-90\" already exists" pod="kube-system/kube-apiserver-172-236-108-90" Apr 24 00:35:05.308078 kubelet[2738]: E0424 
00:35:05.308060 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:05.339059 kubelet[2738]: I0424 00:35:05.339009 2738 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 24 00:35:05.343046 kubelet[2738]: I0424 00:35:05.342984 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-108-90" podStartSLOduration=3.342970558 podStartE2EDuration="3.342970558s" podCreationTimestamp="2026-04-24 00:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:35:05.3355312 +0000 UTC m=+1.204706625" watchObservedRunningTime="2026-04-24 00:35:05.342970558 +0000 UTC m=+1.212145963" Apr 24 00:35:05.343698 kubelet[2738]: I0424 00:35:05.343645 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-108-90" podStartSLOduration=3.343636449 podStartE2EDuration="3.343636449s" podCreationTimestamp="2026-04-24 00:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:35:05.342929778 +0000 UTC m=+1.212105183" watchObservedRunningTime="2026-04-24 00:35:05.343636449 +0000 UTC m=+1.212811884" Apr 24 00:35:05.350332 kubelet[2738]: I0424 00:35:05.350262 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-108-90" podStartSLOduration=3.350249365 podStartE2EDuration="3.350249365s" podCreationTimestamp="2026-04-24 00:35:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:35:05.350082935 +0000 UTC m=+1.219258340" 
watchObservedRunningTime="2026-04-24 00:35:05.350249365 +0000 UTC m=+1.219424770" Apr 24 00:35:06.207311 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 24 00:35:06.302763 kubelet[2738]: E0424 00:35:06.302158 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:06.302763 kubelet[2738]: E0424 00:35:06.302250 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:06.517028 kubelet[2738]: E0424 00:35:06.516811 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:07.306213 kubelet[2738]: E0424 00:35:07.303311 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:07.306766 kubelet[2738]: E0424 00:35:07.306743 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:09.842874 kubelet[2738]: I0424 00:35:09.842844 2738 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 00:35:09.843352 containerd[1559]: time="2026-04-24T00:35:09.843259150Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Apr 24 00:35:09.843622 kubelet[2738]: I0424 00:35:09.843421 2738 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 00:35:10.448784 systemd[1]: Created slice kubepods-besteffort-pod544847e4_7a72_4f0c_94a7_87533880ad85.slice - libcontainer container kubepods-besteffort-pod544847e4_7a72_4f0c_94a7_87533880ad85.slice. Apr 24 00:35:10.477114 kubelet[2738]: I0424 00:35:10.477062 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/544847e4-7a72-4f0c-94a7-87533880ad85-lib-modules\") pod \"kube-proxy-jklbf\" (UID: \"544847e4-7a72-4f0c-94a7-87533880ad85\") " pod="kube-system/kube-proxy-jklbf" Apr 24 00:35:10.477114 kubelet[2738]: I0424 00:35:10.477114 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/544847e4-7a72-4f0c-94a7-87533880ad85-kube-proxy\") pod \"kube-proxy-jklbf\" (UID: \"544847e4-7a72-4f0c-94a7-87533880ad85\") " pod="kube-system/kube-proxy-jklbf" Apr 24 00:35:10.477272 kubelet[2738]: I0424 00:35:10.477143 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/544847e4-7a72-4f0c-94a7-87533880ad85-xtables-lock\") pod \"kube-proxy-jklbf\" (UID: \"544847e4-7a72-4f0c-94a7-87533880ad85\") " pod="kube-system/kube-proxy-jklbf" Apr 24 00:35:10.477272 kubelet[2738]: I0424 00:35:10.477169 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfhzx\" (UniqueName: \"kubernetes.io/projected/544847e4-7a72-4f0c-94a7-87533880ad85-kube-api-access-pfhzx\") pod \"kube-proxy-jklbf\" (UID: \"544847e4-7a72-4f0c-94a7-87533880ad85\") " pod="kube-system/kube-proxy-jklbf" Apr 24 00:35:10.583603 kubelet[2738]: E0424 00:35:10.583566 2738 projected.go:291] Couldn't get configMap 
kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 24 00:35:10.583603 kubelet[2738]: E0424 00:35:10.583594 2738 projected.go:196] Error preparing data for projected volume kube-api-access-pfhzx for pod kube-system/kube-proxy-jklbf: configmap "kube-root-ca.crt" not found Apr 24 00:35:10.583994 kubelet[2738]: E0424 00:35:10.583658 2738 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/544847e4-7a72-4f0c-94a7-87533880ad85-kube-api-access-pfhzx podName:544847e4-7a72-4f0c-94a7-87533880ad85 nodeName:}" failed. No retries permitted until 2026-04-24 00:35:11.083637459 +0000 UTC m=+6.952812864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pfhzx" (UniqueName: "kubernetes.io/projected/544847e4-7a72-4f0c-94a7-87533880ad85-kube-api-access-pfhzx") pod "kube-proxy-jklbf" (UID: "544847e4-7a72-4f0c-94a7-87533880ad85") : configmap "kube-root-ca.crt" not found Apr 24 00:35:11.117367 systemd[1]: Created slice kubepods-besteffort-pod00a98f84_8e3a_4440_806d_c3fe6bc539d9.slice - libcontainer container kubepods-besteffort-pod00a98f84_8e3a_4440_806d_c3fe6bc539d9.slice. 
Apr 24 00:35:11.181541 kubelet[2738]: I0424 00:35:11.181476 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbhr\" (UniqueName: \"kubernetes.io/projected/00a98f84-8e3a-4440-806d-c3fe6bc539d9-kube-api-access-5qbhr\") pod \"tigera-operator-687949b757-8c5z4\" (UID: \"00a98f84-8e3a-4440-806d-c3fe6bc539d9\") " pod="tigera-operator/tigera-operator-687949b757-8c5z4" Apr 24 00:35:11.181541 kubelet[2738]: I0424 00:35:11.181550 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/00a98f84-8e3a-4440-806d-c3fe6bc539d9-var-lib-calico\") pod \"tigera-operator-687949b757-8c5z4\" (UID: \"00a98f84-8e3a-4440-806d-c3fe6bc539d9\") " pod="tigera-operator/tigera-operator-687949b757-8c5z4" Apr 24 00:35:11.363410 kubelet[2738]: E0424 00:35:11.362636 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:11.364318 containerd[1559]: time="2026-04-24T00:35:11.364254574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jklbf,Uid:544847e4-7a72-4f0c-94a7-87533880ad85,Namespace:kube-system,Attempt:0,}" Apr 24 00:35:11.385109 containerd[1559]: time="2026-04-24T00:35:11.384853405Z" level=info msg="connecting to shim 690cde4c0986d81f8d9a0df3cd5c4d899489b2bb25fed142264b2f324ebb2a38" address="unix:///run/containerd/s/3ad5347c4e211ca806be68721b74f0193b6b1ce4e8730d739adb35f795d0dba4" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:11.419557 systemd[1]: Started cri-containerd-690cde4c0986d81f8d9a0df3cd5c4d899489b2bb25fed142264b2f324ebb2a38.scope - libcontainer container 690cde4c0986d81f8d9a0df3cd5c4d899489b2bb25fed142264b2f324ebb2a38. 
Apr 24 00:35:11.424993 containerd[1559]: time="2026-04-24T00:35:11.424936374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-687949b757-8c5z4,Uid:00a98f84-8e3a-4440-806d-c3fe6bc539d9,Namespace:tigera-operator,Attempt:0,}" Apr 24 00:35:11.441625 containerd[1559]: time="2026-04-24T00:35:11.441584338Z" level=info msg="connecting to shim 90062b7df85eeb8f99d6d477ca237ea57d9e220d89f91d232d7110cba28718d5" address="unix:///run/containerd/s/163495a64c027cba7c69dab04ecd2ade88b0ed4dab889848fb1022ffec1d52e7" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:11.479503 systemd[1]: Started cri-containerd-90062b7df85eeb8f99d6d477ca237ea57d9e220d89f91d232d7110cba28718d5.scope - libcontainer container 90062b7df85eeb8f99d6d477ca237ea57d9e220d89f91d232d7110cba28718d5. Apr 24 00:35:11.485762 containerd[1559]: time="2026-04-24T00:35:11.485652242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jklbf,Uid:544847e4-7a72-4f0c-94a7-87533880ad85,Namespace:kube-system,Attempt:0,} returns sandbox id \"690cde4c0986d81f8d9a0df3cd5c4d899489b2bb25fed142264b2f324ebb2a38\"" Apr 24 00:35:11.487394 kubelet[2738]: E0424 00:35:11.486887 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:11.495652 containerd[1559]: time="2026-04-24T00:35:11.495625771Z" level=info msg="CreateContainer within sandbox \"690cde4c0986d81f8d9a0df3cd5c4d899489b2bb25fed142264b2f324ebb2a38\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 00:35:11.505699 containerd[1559]: time="2026-04-24T00:35:11.505674447Z" level=info msg="Container d8b270b045b4f50bd7134dac2efbc0a6d856a35b030561706b8924c918a999b7: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:11.514410 containerd[1559]: time="2026-04-24T00:35:11.514372819Z" level=info msg="CreateContainer within sandbox 
\"690cde4c0986d81f8d9a0df3cd5c4d899489b2bb25fed142264b2f324ebb2a38\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d8b270b045b4f50bd7134dac2efbc0a6d856a35b030561706b8924c918a999b7\"" Apr 24 00:35:11.515680 containerd[1559]: time="2026-04-24T00:35:11.515252813Z" level=info msg="StartContainer for \"d8b270b045b4f50bd7134dac2efbc0a6d856a35b030561706b8924c918a999b7\"" Apr 24 00:35:11.526479 containerd[1559]: time="2026-04-24T00:35:11.526449082Z" level=info msg="connecting to shim d8b270b045b4f50bd7134dac2efbc0a6d856a35b030561706b8924c918a999b7" address="unix:///run/containerd/s/3ad5347c4e211ca806be68721b74f0193b6b1ce4e8730d739adb35f795d0dba4" protocol=ttrpc version=3 Apr 24 00:35:11.559531 systemd[1]: Started cri-containerd-d8b270b045b4f50bd7134dac2efbc0a6d856a35b030561706b8924c918a999b7.scope - libcontainer container d8b270b045b4f50bd7134dac2efbc0a6d856a35b030561706b8924c918a999b7. Apr 24 00:35:11.566325 containerd[1559]: time="2026-04-24T00:35:11.566223453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-687949b757-8c5z4,Uid:00a98f84-8e3a-4440-806d-c3fe6bc539d9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"90062b7df85eeb8f99d6d477ca237ea57d9e220d89f91d232d7110cba28718d5\"" Apr 24 00:35:11.571015 containerd[1559]: time="2026-04-24T00:35:11.570974818Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\"" Apr 24 00:35:11.655488 containerd[1559]: time="2026-04-24T00:35:11.653119673Z" level=info msg="StartContainer for \"d8b270b045b4f50bd7134dac2efbc0a6d856a35b030561706b8924c918a999b7\" returns successfully" Apr 24 00:35:12.308552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835494451.mount: Deactivated successfully. 
Apr 24 00:35:12.316760 kubelet[2738]: E0424 00:35:12.316719 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:12.331858 kubelet[2738]: I0424 00:35:12.331687 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-jklbf" podStartSLOduration=2.331607731 podStartE2EDuration="2.331607731s" podCreationTimestamp="2026-04-24 00:35:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:35:12.33032686 +0000 UTC m=+8.199502285" watchObservedRunningTime="2026-04-24 00:35:12.331607731 +0000 UTC m=+8.200783136" Apr 24 00:35:13.164725 containerd[1559]: time="2026-04-24T00:35:13.164671720Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:13.165682 containerd[1559]: time="2026-04-24T00:35:13.165504110Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=41007543" Apr 24 00:35:13.166461 containerd[1559]: time="2026-04-24T00:35:13.166430567Z" level=info msg="ImageCreate event name:\"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:13.168249 containerd[1559]: time="2026-04-24T00:35:13.168218672Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:13.168830 containerd[1559]: time="2026-04-24T00:35:13.168799521Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\", repo tag 
\"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"41003538\" in 1.597562364s" Apr 24 00:35:13.168895 containerd[1559]: time="2026-04-24T00:35:13.168881678Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\"" Apr 24 00:35:13.174180 containerd[1559]: time="2026-04-24T00:35:13.174150108Z" level=info msg="CreateContainer within sandbox \"90062b7df85eeb8f99d6d477ca237ea57d9e220d89f91d232d7110cba28718d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 00:35:13.179325 containerd[1559]: time="2026-04-24T00:35:13.179239055Z" level=info msg="Container 4c9e6bd3ac761a64cf7e4a2bde54002b02246e5feec91939784f536908695c72: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:13.185060 containerd[1559]: time="2026-04-24T00:35:13.185016385Z" level=info msg="CreateContainer within sandbox \"90062b7df85eeb8f99d6d477ca237ea57d9e220d89f91d232d7110cba28718d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4c9e6bd3ac761a64cf7e4a2bde54002b02246e5feec91939784f536908695c72\"" Apr 24 00:35:13.185946 containerd[1559]: time="2026-04-24T00:35:13.185912243Z" level=info msg="StartContainer for \"4c9e6bd3ac761a64cf7e4a2bde54002b02246e5feec91939784f536908695c72\"" Apr 24 00:35:13.187111 containerd[1559]: time="2026-04-24T00:35:13.187084671Z" level=info msg="connecting to shim 4c9e6bd3ac761a64cf7e4a2bde54002b02246e5feec91939784f536908695c72" address="unix:///run/containerd/s/163495a64c027cba7c69dab04ecd2ade88b0ed4dab889848fb1022ffec1d52e7" protocol=ttrpc version=3 Apr 24 00:35:13.213434 systemd[1]: Started cri-containerd-4c9e6bd3ac761a64cf7e4a2bde54002b02246e5feec91939784f536908695c72.scope - libcontainer container 4c9e6bd3ac761a64cf7e4a2bde54002b02246e5feec91939784f536908695c72. 
Apr 24 00:35:13.249489 containerd[1559]: time="2026-04-24T00:35:13.249428011Z" level=info msg="StartContainer for \"4c9e6bd3ac761a64cf7e4a2bde54002b02246e5feec91939784f536908695c72\" returns successfully" Apr 24 00:35:13.330932 kubelet[2738]: I0424 00:35:13.330080 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-687949b757-8c5z4" podStartSLOduration=0.728862552 podStartE2EDuration="2.330066701s" podCreationTimestamp="2026-04-24 00:35:11 +0000 UTC" firstStartedPulling="2026-04-24 00:35:11.568549997 +0000 UTC m=+7.437725402" lastFinishedPulling="2026-04-24 00:35:13.169754147 +0000 UTC m=+9.038929551" observedRunningTime="2026-04-24 00:35:13.329974514 +0000 UTC m=+9.199149919" watchObservedRunningTime="2026-04-24 00:35:13.330066701 +0000 UTC m=+9.199242106" Apr 24 00:35:15.479115 kubelet[2738]: E0424 00:35:15.478767 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:16.523305 kubelet[2738]: E0424 00:35:16.523211 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:17.283549 kubelet[2738]: E0424 00:35:17.283514 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:18.768509 sudo[1796]: pam_unix(sudo:session): session closed for user root Apr 24 00:35:18.866821 sshd[1795]: Connection closed by 20.229.252.112 port 52588 Apr 24 00:35:18.869572 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Apr 24 00:35:18.873875 systemd[1]: sshd@6-172.236.108.90:22-20.229.252.112:52588.service: Deactivated successfully. 
Apr 24 00:35:18.878061 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 00:35:18.878515 systemd[1]: session-7.scope: Consumed 3.443s CPU time, 228.7M memory peak. Apr 24 00:35:18.886653 systemd-logind[1530]: Session 7 logged out. Waiting for processes to exit. Apr 24 00:35:18.889458 systemd-logind[1530]: Removed session 7. Apr 24 00:35:20.358835 update_engine[1531]: I20260424 00:35:20.358775 1531 update_attempter.cc:509] Updating boot flags... Apr 24 00:35:21.587480 systemd[1]: Created slice kubepods-besteffort-podc2cdf811_e3e7_4327_ab97_9bffea704463.slice - libcontainer container kubepods-besteffort-podc2cdf811_e3e7_4327_ab97_9bffea704463.slice. Apr 24 00:35:21.665120 systemd[1]: Created slice kubepods-besteffort-podf7bc6ef2_82d3_4503_8811_638504cdaa6b.slice - libcontainer container kubepods-besteffort-podf7bc6ef2_82d3_4503_8811_638504cdaa6b.slice. Apr 24 00:35:21.741560 kubelet[2738]: I0424 00:35:21.741528 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c2cdf811-e3e7-4327-ab97-9bffea704463-typha-certs\") pod \"calico-typha-5f7d777d8d-spgvv\" (UID: \"c2cdf811-e3e7-4327-ab97-9bffea704463\") " pod="calico-system/calico-typha-5f7d777d8d-spgvv" Apr 24 00:35:21.742036 kubelet[2738]: I0424 00:35:21.741993 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlstk\" (UniqueName: \"kubernetes.io/projected/c2cdf811-e3e7-4327-ab97-9bffea704463-kube-api-access-hlstk\") pod \"calico-typha-5f7d777d8d-spgvv\" (UID: \"c2cdf811-e3e7-4327-ab97-9bffea704463\") " pod="calico-system/calico-typha-5f7d777d8d-spgvv" Apr 24 00:35:21.742301 kubelet[2738]: I0424 00:35:21.742201 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2cdf811-e3e7-4327-ab97-9bffea704463-tigera-ca-bundle\") pod 
\"calico-typha-5f7d777d8d-spgvv\" (UID: \"c2cdf811-e3e7-4327-ab97-9bffea704463\") " pod="calico-system/calico-typha-5f7d777d8d-spgvv" Apr 24 00:35:21.756776 kubelet[2738]: E0424 00:35:21.756694 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgbk" podUID="bb05f61e-d422-416a-8c42-0363cb92c2dc" Apr 24 00:35:21.843090 kubelet[2738]: I0424 00:35:21.842654 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-cni-net-dir\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843090 kubelet[2738]: I0424 00:35:21.842685 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-flexvol-driver-host\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843090 kubelet[2738]: I0424 00:35:21.842703 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-lib-modules\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843090 kubelet[2738]: I0424 00:35:21.842717 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-xtables-lock\") pod \"calico-node-tk88k\" (UID: 
\"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843090 kubelet[2738]: I0424 00:35:21.842774 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/bb05f61e-d422-416a-8c42-0363cb92c2dc-registration-dir\") pod \"csi-node-driver-tbgbk\" (UID: \"bb05f61e-d422-416a-8c42-0363cb92c2dc\") " pod="calico-system/csi-node-driver-tbgbk" Apr 24 00:35:21.843392 kubelet[2738]: I0424 00:35:21.842814 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-cni-bin-dir\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843392 kubelet[2738]: I0424 00:35:21.842830 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f7bc6ef2-82d3-4503-8811-638504cdaa6b-node-certs\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843392 kubelet[2738]: I0424 00:35:21.842844 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-var-run-calico\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843392 kubelet[2738]: I0424 00:35:21.842859 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bb05f61e-d422-416a-8c42-0363cb92c2dc-kubelet-dir\") pod \"csi-node-driver-tbgbk\" (UID: \"bb05f61e-d422-416a-8c42-0363cb92c2dc\") " 
pod="calico-system/csi-node-driver-tbgbk" Apr 24 00:35:21.843392 kubelet[2738]: I0424 00:35:21.842872 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/bb05f61e-d422-416a-8c42-0363cb92c2dc-socket-dir\") pod \"csi-node-driver-tbgbk\" (UID: \"bb05f61e-d422-416a-8c42-0363cb92c2dc\") " pod="calico-system/csi-node-driver-tbgbk" Apr 24 00:35:21.843529 kubelet[2738]: I0424 00:35:21.842899 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7bc6ef2-82d3-4503-8811-638504cdaa6b-tigera-ca-bundle\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843529 kubelet[2738]: I0424 00:35:21.842914 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-cni-log-dir\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843529 kubelet[2738]: I0424 00:35:21.842942 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t684\" (UniqueName: \"kubernetes.io/projected/f7bc6ef2-82d3-4503-8811-638504cdaa6b-kube-api-access-9t684\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843529 kubelet[2738]: I0424 00:35:21.842956 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6lnn\" (UniqueName: \"kubernetes.io/projected/bb05f61e-d422-416a-8c42-0363cb92c2dc-kube-api-access-q6lnn\") pod \"csi-node-driver-tbgbk\" (UID: \"bb05f61e-d422-416a-8c42-0363cb92c2dc\") " 
pod="calico-system/csi-node-driver-tbgbk" Apr 24 00:35:21.843529 kubelet[2738]: I0424 00:35:21.842980 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-nodeproc\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843649 kubelet[2738]: I0424 00:35:21.842993 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-policysync\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843649 kubelet[2738]: I0424 00:35:21.843005 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/bb05f61e-d422-416a-8c42-0363cb92c2dc-varrun\") pod \"csi-node-driver-tbgbk\" (UID: \"bb05f61e-d422-416a-8c42-0363cb92c2dc\") " pod="calico-system/csi-node-driver-tbgbk" Apr 24 00:35:21.843649 kubelet[2738]: I0424 00:35:21.843035 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-sys-fs\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843649 kubelet[2738]: I0424 00:35:21.843047 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-var-lib-calico\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.843649 kubelet[2738]: I0424 00:35:21.843061 2738 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/f7bc6ef2-82d3-4503-8811-638504cdaa6b-bpffs\") pod \"calico-node-tk88k\" (UID: \"f7bc6ef2-82d3-4503-8811-638504cdaa6b\") " pod="calico-system/calico-node-tk88k" Apr 24 00:35:21.896904 kubelet[2738]: E0424 00:35:21.896873 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:21.897549 containerd[1559]: time="2026-04-24T00:35:21.897516390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f7d777d8d-spgvv,Uid:c2cdf811-e3e7-4327-ab97-9bffea704463,Namespace:calico-system,Attempt:0,}" Apr 24 00:35:21.917309 containerd[1559]: time="2026-04-24T00:35:21.916647836Z" level=info msg="connecting to shim 6c1935a01da88dd5a0e59de449ff66af4c2e0068e34b4b86c9549a99f7d23b24" address="unix:///run/containerd/s/af913338cbd218cdd5e9912f9b1628e61f6afe3dab18fa05743f4a88dcb86b69" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:21.962743 systemd[1]: Started cri-containerd-6c1935a01da88dd5a0e59de449ff66af4c2e0068e34b4b86c9549a99f7d23b24.scope - libcontainer container 6c1935a01da88dd5a0e59de449ff66af4c2e0068e34b4b86c9549a99f7d23b24. Apr 24 00:35:21.974185 kubelet[2738]: E0424 00:35:21.974138 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:21.974695 kubelet[2738]: W0424 00:35:21.974678 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:21.974836 kubelet[2738]: E0424 00:35:21.974803 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:21.977203 kubelet[2738]: E0424 00:35:21.977133 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:21.977203 kubelet[2738]: W0424 00:35:21.977148 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:21.977203 kubelet[2738]: E0424 00:35:21.977163 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:21.977953 kubelet[2738]: E0424 00:35:21.977942 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:21.978064 kubelet[2738]: W0424 00:35:21.978029 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:21.978064 kubelet[2738]: E0424 00:35:21.978044 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:22.026798 containerd[1559]: time="2026-04-24T00:35:22.026558148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f7d777d8d-spgvv,Uid:c2cdf811-e3e7-4327-ab97-9bffea704463,Namespace:calico-system,Attempt:0,} returns sandbox id \"6c1935a01da88dd5a0e59de449ff66af4c2e0068e34b4b86c9549a99f7d23b24\"" Apr 24 00:35:22.029316 kubelet[2738]: E0424 00:35:22.029249 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:22.037316 containerd[1559]: time="2026-04-24T00:35:22.037199838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\"" Apr 24 00:35:22.272678 containerd[1559]: time="2026-04-24T00:35:22.272634768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tk88k,Uid:f7bc6ef2-82d3-4503-8811-638504cdaa6b,Namespace:calico-system,Attempt:0,}" Apr 24 00:35:22.286774 containerd[1559]: time="2026-04-24T00:35:22.286735240Z" level=info msg="connecting to shim 2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf" address="unix:///run/containerd/s/0e17527449e5697e503e86b6453cab416078f7d81948c0770af46eab2d069b89" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:22.308419 systemd[1]: Started cri-containerd-2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf.scope - libcontainer container 2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf. Apr 24 00:35:22.337435 containerd[1559]: time="2026-04-24T00:35:22.337392240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tk88k,Uid:f7bc6ef2-82d3-4503-8811-638504cdaa6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf\"" Apr 24 00:35:22.978374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount646670237.mount: Deactivated successfully. 
Apr 24 00:35:23.271399 kubelet[2738]: E0424 00:35:23.270372 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgbk" podUID="bb05f61e-d422-416a-8c42-0363cb92c2dc" Apr 24 00:35:23.601615 containerd[1559]: time="2026-04-24T00:35:23.601308798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:23.602464 containerd[1559]: time="2026-04-24T00:35:23.602203880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=35813139" Apr 24 00:35:23.603818 containerd[1559]: time="2026-04-24T00:35:23.603334170Z" level=info msg="ImageCreate event name:\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:23.605548 containerd[1559]: time="2026-04-24T00:35:23.605519570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:23.606020 containerd[1559]: time="2026-04-24T00:35:23.605986191Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"35812993\" in 1.568753403s" Apr 24 00:35:23.606020 containerd[1559]: time="2026-04-24T00:35:23.606018230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference 
\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\"" Apr 24 00:35:23.610786 containerd[1559]: time="2026-04-24T00:35:23.609872959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 24 00:35:23.624604 containerd[1559]: time="2026-04-24T00:35:23.624565728Z" level=info msg="CreateContainer within sandbox \"6c1935a01da88dd5a0e59de449ff66af4c2e0068e34b4b86c9549a99f7d23b24\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 00:35:23.629493 containerd[1559]: time="2026-04-24T00:35:23.629438378Z" level=info msg="Container d46d236d2c8a78da38267ef04acfa2a3be3e7bf5fa39601c4914ba56ed4f7baf: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:23.635247 containerd[1559]: time="2026-04-24T00:35:23.635199511Z" level=info msg="CreateContainer within sandbox \"6c1935a01da88dd5a0e59de449ff66af4c2e0068e34b4b86c9549a99f7d23b24\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d46d236d2c8a78da38267ef04acfa2a3be3e7bf5fa39601c4914ba56ed4f7baf\"" Apr 24 00:35:23.637563 containerd[1559]: time="2026-04-24T00:35:23.637513619Z" level=info msg="StartContainer for \"d46d236d2c8a78da38267ef04acfa2a3be3e7bf5fa39601c4914ba56ed4f7baf\"" Apr 24 00:35:23.639475 containerd[1559]: time="2026-04-24T00:35:23.639448464Z" level=info msg="connecting to shim d46d236d2c8a78da38267ef04acfa2a3be3e7bf5fa39601c4914ba56ed4f7baf" address="unix:///run/containerd/s/af913338cbd218cdd5e9912f9b1628e61f6afe3dab18fa05743f4a88dcb86b69" protocol=ttrpc version=3 Apr 24 00:35:23.661489 systemd[1]: Started cri-containerd-d46d236d2c8a78da38267ef04acfa2a3be3e7bf5fa39601c4914ba56ed4f7baf.scope - libcontainer container d46d236d2c8a78da38267ef04acfa2a3be3e7bf5fa39601c4914ba56ed4f7baf. 
Apr 24 00:35:23.721400 containerd[1559]: time="2026-04-24T00:35:23.721339543Z" level=info msg="StartContainer for \"d46d236d2c8a78da38267ef04acfa2a3be3e7bf5fa39601c4914ba56ed4f7baf\" returns successfully" Apr 24 00:35:24.272203 containerd[1559]: time="2026-04-24T00:35:24.272149228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:24.275456 containerd[1559]: time="2026-04-24T00:35:24.275429232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=4601981" Apr 24 00:35:24.276474 containerd[1559]: time="2026-04-24T00:35:24.276427745Z" level=info msg="ImageCreate event name:\"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:24.277975 containerd[1559]: time="2026-04-24T00:35:24.277934489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:24.278703 containerd[1559]: time="2026-04-24T00:35:24.278672576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"7563366\" in 668.740838ms" Apr 24 00:35:24.278744 containerd[1559]: time="2026-04-24T00:35:24.278703496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\"" Apr 24 00:35:24.285736 containerd[1559]: 
time="2026-04-24T00:35:24.285703415Z" level=info msg="CreateContainer within sandbox \"2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 00:35:24.295435 containerd[1559]: time="2026-04-24T00:35:24.294636771Z" level=info msg="Container 5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:24.300021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728888410.mount: Deactivated successfully. Apr 24 00:35:24.304531 containerd[1559]: time="2026-04-24T00:35:24.304488501Z" level=info msg="CreateContainer within sandbox \"2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5\"" Apr 24 00:35:24.305206 containerd[1559]: time="2026-04-24T00:35:24.305172819Z" level=info msg="StartContainer for \"5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5\"" Apr 24 00:35:24.307602 containerd[1559]: time="2026-04-24T00:35:24.307579598Z" level=info msg="connecting to shim 5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5" address="unix:///run/containerd/s/0e17527449e5697e503e86b6453cab416078f7d81948c0770af46eab2d069b89" protocol=ttrpc version=3 Apr 24 00:35:24.333632 systemd[1]: Started cri-containerd-5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5.scope - libcontainer container 5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5. 
Apr 24 00:35:24.353236 kubelet[2738]: E0424 00:35:24.353085 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:24.356028 kubelet[2738]: E0424 00:35:24.355978 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.356028 kubelet[2738]: W0424 00:35:24.355995 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.356253 kubelet[2738]: E0424 00:35:24.356122 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.356648 kubelet[2738]: E0424 00:35:24.356637 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.356800 kubelet[2738]: W0424 00:35:24.356696 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.356800 kubelet[2738]: E0424 00:35:24.356711 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.357406 kubelet[2738]: E0424 00:35:24.357325 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.357406 kubelet[2738]: W0424 00:35:24.357337 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.357406 kubelet[2738]: E0424 00:35:24.357348 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.357886 kubelet[2738]: E0424 00:35:24.357794 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.357886 kubelet[2738]: W0424 00:35:24.357805 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.357886 kubelet[2738]: E0424 00:35:24.357826 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.358480 kubelet[2738]: E0424 00:35:24.358432 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.358480 kubelet[2738]: W0424 00:35:24.358442 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.358480 kubelet[2738]: E0424 00:35:24.358451 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.359208 kubelet[2738]: E0424 00:35:24.358895 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.359208 kubelet[2738]: W0424 00:35:24.358939 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.359208 kubelet[2738]: E0424 00:35:24.359152 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.359589 kubelet[2738]: E0424 00:35:24.359578 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.359670 kubelet[2738]: W0424 00:35:24.359659 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.359737 kubelet[2738]: E0424 00:35:24.359707 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.359997 kubelet[2738]: E0424 00:35:24.359986 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.360090 kubelet[2738]: W0424 00:35:24.360076 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.360244 kubelet[2738]: E0424 00:35:24.360124 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.360642 kubelet[2738]: E0424 00:35:24.360621 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.360731 kubelet[2738]: W0424 00:35:24.360697 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.360844 kubelet[2738]: E0424 00:35:24.360776 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.361198 kubelet[2738]: E0424 00:35:24.361116 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.361198 kubelet[2738]: W0424 00:35:24.361128 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.361198 kubelet[2738]: E0424 00:35:24.361138 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.361598 kubelet[2738]: E0424 00:35:24.361577 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.361734 kubelet[2738]: W0424 00:35:24.361652 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.361734 kubelet[2738]: E0424 00:35:24.361691 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.362065 kubelet[2738]: E0424 00:35:24.362021 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.362065 kubelet[2738]: W0424 00:35:24.362033 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.362241 kubelet[2738]: E0424 00:35:24.362154 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.362533 kubelet[2738]: E0424 00:35:24.362521 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.362683 kubelet[2738]: W0424 00:35:24.362577 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.362683 kubelet[2738]: E0424 00:35:24.362588 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.363030 kubelet[2738]: E0424 00:35:24.363017 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.363371 kubelet[2738]: W0424 00:35:24.363355 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.363429 kubelet[2738]: E0424 00:35:24.363417 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.363865 kubelet[2738]: E0424 00:35:24.363849 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.363865 kubelet[2738]: W0424 00:35:24.363864 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.364098 kubelet[2738]: E0424 00:35:24.363877 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.364633 kubelet[2738]: E0424 00:35:24.364618 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.364633 kubelet[2738]: W0424 00:35:24.364630 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.364733 kubelet[2738]: E0424 00:35:24.364640 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.365146 kubelet[2738]: E0424 00:35:24.365096 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.365146 kubelet[2738]: W0424 00:35:24.365107 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.365146 kubelet[2738]: E0424 00:35:24.365116 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.365489 kubelet[2738]: E0424 00:35:24.365476 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.365489 kubelet[2738]: W0424 00:35:24.365487 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.365596 kubelet[2738]: E0424 00:35:24.365497 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.366707 kubelet[2738]: E0424 00:35:24.366357 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.366707 kubelet[2738]: W0424 00:35:24.366366 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.366707 kubelet[2738]: E0424 00:35:24.366375 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.367040 kubelet[2738]: E0424 00:35:24.366954 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.367040 kubelet[2738]: W0424 00:35:24.366967 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.367040 kubelet[2738]: E0424 00:35:24.366976 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.367429 kubelet[2738]: E0424 00:35:24.367161 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.367429 kubelet[2738]: W0424 00:35:24.367169 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.367429 kubelet[2738]: E0424 00:35:24.367177 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.367827 kubelet[2738]: E0424 00:35:24.367710 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.367827 kubelet[2738]: W0424 00:35:24.367724 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.367827 kubelet[2738]: E0424 00:35:24.367734 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.368205 kubelet[2738]: E0424 00:35:24.368118 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.368205 kubelet[2738]: W0424 00:35:24.368129 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.368205 kubelet[2738]: E0424 00:35:24.368137 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.368637 kubelet[2738]: I0424 00:35:24.368516 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-5f7d777d8d-spgvv" podStartSLOduration=1.797187893 podStartE2EDuration="3.368503658s" podCreationTimestamp="2026-04-24 00:35:21 +0000 UTC" firstStartedPulling="2026-04-24 00:35:22.035840264 +0000 UTC m=+17.905015669" lastFinishedPulling="2026-04-24 00:35:23.607156029 +0000 UTC m=+19.476331434" observedRunningTime="2026-04-24 00:35:24.36662358 +0000 UTC m=+20.235798995" watchObservedRunningTime="2026-04-24 00:35:24.368503658 +0000 UTC m=+20.237679063" Apr 24 00:35:24.369309 kubelet[2738]: E0424 00:35:24.368675 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.369394 kubelet[2738]: W0424 00:35:24.369378 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.369456 kubelet[2738]: E0424 00:35:24.369445 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.370454 kubelet[2738]: E0424 00:35:24.370431 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.370454 kubelet[2738]: W0424 00:35:24.370447 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.370826 kubelet[2738]: E0424 00:35:24.370456 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.370826 kubelet[2738]: E0424 00:35:24.370735 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.370826 kubelet[2738]: W0424 00:35:24.370744 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.370826 kubelet[2738]: E0424 00:35:24.370753 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.371554 kubelet[2738]: E0424 00:35:24.371492 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.371554 kubelet[2738]: W0424 00:35:24.371503 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.371554 kubelet[2738]: E0424 00:35:24.371513 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.371738 kubelet[2738]: E0424 00:35:24.371699 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.371738 kubelet[2738]: W0424 00:35:24.371709 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.371738 kubelet[2738]: E0424 00:35:24.371718 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.372066 kubelet[2738]: E0424 00:35:24.372048 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.372066 kubelet[2738]: W0424 00:35:24.372062 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.372206 kubelet[2738]: E0424 00:35:24.372071 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.372574 kubelet[2738]: E0424 00:35:24.372546 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.372574 kubelet[2738]: W0424 00:35:24.372559 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.372574 kubelet[2738]: E0424 00:35:24.372567 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.372934 kubelet[2738]: E0424 00:35:24.372920 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.372934 kubelet[2738]: W0424 00:35:24.372932 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.373176 kubelet[2738]: E0424 00:35:24.373129 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.373471 kubelet[2738]: E0424 00:35:24.373458 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.373471 kubelet[2738]: W0424 00:35:24.373469 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.373526 kubelet[2738]: E0424 00:35:24.373477 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:35:24.373717 kubelet[2738]: E0424 00:35:24.373704 2738 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:35:24.373717 kubelet[2738]: W0424 00:35:24.373715 2738 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:35:24.373765 kubelet[2738]: E0424 00:35:24.373723 2738 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:35:24.430482 containerd[1559]: time="2026-04-24T00:35:24.430413020Z" level=info msg="StartContainer for \"5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5\" returns successfully" Apr 24 00:35:24.460580 systemd[1]: cri-containerd-5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5.scope: Deactivated successfully. Apr 24 00:35:24.466638 containerd[1559]: time="2026-04-24T00:35:24.466552908Z" level=info msg="received container exit event container_id:\"5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5\" id:\"5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5\" pid:3331 exited_at:{seconds:1776990924 nanos:465789491}" Apr 24 00:35:24.505757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5dbaa317ef53ef8f2203ea7a9b693078bdc5e0aabab4d301b33a081c7351f4e5-rootfs.mount: Deactivated successfully. 
Apr 24 00:35:25.270512 kubelet[2738]: E0424 00:35:25.270436 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgbk" podUID="bb05f61e-d422-416a-8c42-0363cb92c2dc" Apr 24 00:35:25.365937 kubelet[2738]: I0424 00:35:25.365647 2738 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 00:35:25.366568 kubelet[2738]: E0424 00:35:25.366177 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:25.375326 containerd[1559]: time="2026-04-24T00:35:25.374178019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\"" Apr 24 00:35:27.269705 kubelet[2738]: E0424 00:35:27.269668 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgbk" podUID="bb05f61e-d422-416a-8c42-0363cb92c2dc" Apr 24 00:35:29.269750 kubelet[2738]: E0424 00:35:29.269707 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgbk" podUID="bb05f61e-d422-416a-8c42-0363cb92c2dc" Apr 24 00:35:29.535926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3968441764.mount: Deactivated successfully. 
Apr 24 00:35:29.569065 containerd[1559]: time="2026-04-24T00:35:29.569015904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:35:29.569934 containerd[1559]: time="2026-04-24T00:35:29.569831495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=159374404"
Apr 24 00:35:29.570671 containerd[1559]: time="2026-04-24T00:35:29.570641975Z" level=info msg="ImageCreate event name:\"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:35:29.572484 containerd[1559]: time="2026-04-24T00:35:29.572458362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:35:29.573124 containerd[1559]: time="2026-04-24T00:35:29.573101034Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"159374266\" in 4.198871966s"
Apr 24 00:35:29.573208 containerd[1559]: time="2026-04-24T00:35:29.573192783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\""
Apr 24 00:35:29.578421 containerd[1559]: time="2026-04-24T00:35:29.578398649Z" level=info msg="CreateContainer within sandbox \"2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 24 00:35:29.589316 containerd[1559]: time="2026-04-24T00:35:29.585773400Z" level=info msg="Container 7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:35:29.597231 containerd[1559]: time="2026-04-24T00:35:29.597191980Z" level=info msg="CreateContainer within sandbox \"2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4\""
Apr 24 00:35:29.598302 containerd[1559]: time="2026-04-24T00:35:29.597709104Z" level=info msg="StartContainer for \"7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4\""
Apr 24 00:35:29.599238 containerd[1559]: time="2026-04-24T00:35:29.599218415Z" level=info msg="connecting to shim 7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4" address="unix:///run/containerd/s/0e17527449e5697e503e86b6453cab416078f7d81948c0770af46eab2d069b89" protocol=ttrpc version=3
Apr 24 00:35:29.627485 systemd[1]: Started cri-containerd-7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4.scope - libcontainer container 7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4.
Apr 24 00:35:29.709868 containerd[1559]: time="2026-04-24T00:35:29.709664648Z" level=info msg="StartContainer for \"7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4\" returns successfully"
Apr 24 00:35:29.765401 systemd[1]: cri-containerd-7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4.scope: Deactivated successfully.
Apr 24 00:35:29.766767 containerd[1559]: time="2026-04-24T00:35:29.766724261Z" level=info msg="received container exit event container_id:\"7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4\" id:\"7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4\" pid:3429 exited_at:{seconds:1776990929 nanos:766506373}"
Apr 24 00:35:29.794969 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cfd59249618794b93682852f0a03156e8ca4f3e6af766e4f4c69e3b4a9b8dd4-rootfs.mount: Deactivated successfully.
Apr 24 00:35:30.379276 containerd[1559]: time="2026-04-24T00:35:30.379147117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\""
Apr 24 00:35:31.270109 kubelet[2738]: E0424 00:35:31.269980 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgbk" podUID="bb05f61e-d422-416a-8c42-0363cb92c2dc"
Apr 24 00:35:32.820887 containerd[1559]: time="2026-04-24T00:35:32.820824660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:35:32.822068 containerd[1559]: time="2026-04-24T00:35:32.821867439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=67713351"
Apr 24 00:35:32.822733 containerd[1559]: time="2026-04-24T00:35:32.822702871Z" level=info msg="ImageCreate event name:\"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:35:32.825451 containerd[1559]: time="2026-04-24T00:35:32.825422485Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:35:32.826728 containerd[1559]: time="2026-04-24T00:35:32.826317856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"70674776\" in 2.447137289s"
Apr 24 00:35:32.826728 containerd[1559]: time="2026-04-24T00:35:32.826343496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\""
Apr 24 00:35:32.830777 containerd[1559]: time="2026-04-24T00:35:32.830751132Z" level=info msg="CreateContainer within sandbox \"2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 24 00:35:32.841503 containerd[1559]: time="2026-04-24T00:35:32.840481166Z" level=info msg="Container dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:35:32.845360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923686049.mount: Deactivated successfully.
Apr 24 00:35:32.858364 containerd[1559]: time="2026-04-24T00:35:32.858327880Z" level=info msg="CreateContainer within sandbox \"2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba\""
Apr 24 00:35:32.859068 containerd[1559]: time="2026-04-24T00:35:32.859036812Z" level=info msg="StartContainer for \"dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba\""
Apr 24 00:35:32.860640 containerd[1559]: time="2026-04-24T00:35:32.860619477Z" level=info msg="connecting to shim dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba" address="unix:///run/containerd/s/0e17527449e5697e503e86b6453cab416078f7d81948c0770af46eab2d069b89" protocol=ttrpc version=3
Apr 24 00:35:32.890418 systemd[1]: Started cri-containerd-dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba.scope - libcontainer container dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba.
Apr 24 00:35:32.972408 containerd[1559]: time="2026-04-24T00:35:32.972012295Z" level=info msg="StartContainer for \"dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba\" returns successfully"
Apr 24 00:35:33.270174 kubelet[2738]: E0424 00:35:33.270103 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tbgbk" podUID="bb05f61e-d422-416a-8c42-0363cb92c2dc"
Apr 24 00:35:33.510120 containerd[1559]: time="2026-04-24T00:35:33.510067215Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 24 00:35:33.514167 systemd[1]: cri-containerd-dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba.scope: Deactivated successfully.
Apr 24 00:35:33.514534 systemd[1]: cri-containerd-dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba.scope: Consumed 511ms CPU time, 193.5M memory peak, 556K read from disk, 173.7M written to disk.
Apr 24 00:35:33.516883 containerd[1559]: time="2026-04-24T00:35:33.516771603Z" level=info msg="received container exit event container_id:\"dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba\" id:\"dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba\" pid:3488 exited_at:{seconds:1776990933 nanos:516403177}"
Apr 24 00:35:33.568624 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd9de3b090256860d6e927c368ab5defa1e149fbfdc621c4e073b681925a4fba-rootfs.mount: Deactivated successfully.
Apr 24 00:35:33.599229 kubelet[2738]: I0424 00:35:33.599184 2738 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 24 00:35:33.664788 systemd[1]: Created slice kubepods-burstable-pod256a07d0_0f83_4c59_8ae7_541bcd7973d3.slice - libcontainer container kubepods-burstable-pod256a07d0_0f83_4c59_8ae7_541bcd7973d3.slice.
Apr 24 00:35:33.678007 systemd[1]: Created slice kubepods-besteffort-pod1b02a44c_0d48_44e4_9230_c06c8d011820.slice - libcontainer container kubepods-besteffort-pod1b02a44c_0d48_44e4_9230_c06c8d011820.slice.
Apr 24 00:35:33.687587 systemd[1]: Created slice kubepods-besteffort-pod0d2d0b97_06b2_4b96_a4e9_02b2e0a0e416.slice - libcontainer container kubepods-besteffort-pod0d2d0b97_06b2_4b96_a4e9_02b2e0a0e416.slice.
Apr 24 00:35:33.695535 systemd[1]: Created slice kubepods-burstable-pod4ef68e2d_3d67_4e31_854d_8266f70925bb.slice - libcontainer container kubepods-burstable-pod4ef68e2d_3d67_4e31_854d_8266f70925bb.slice.
Apr 24 00:35:33.703663 systemd[1]: Created slice kubepods-besteffort-podb4175a70_02ad_4cf0_b71f_e891c587fabf.slice - libcontainer container kubepods-besteffort-podb4175a70_02ad_4cf0_b71f_e891c587fabf.slice.
Apr 24 00:35:33.711498 systemd[1]: Created slice kubepods-besteffort-pod7bc7331d_2c65_432e_a66d_716f0351f0c4.slice - libcontainer container kubepods-besteffort-pod7bc7331d_2c65_432e_a66d_716f0351f0c4.slice.
Apr 24 00:35:33.718782 systemd[1]: Created slice kubepods-besteffort-podaa5282ed_55a8_4e86_a3ba_35e1432c0a03.slice - libcontainer container kubepods-besteffort-podaa5282ed_55a8_4e86_a3ba_35e1432c0a03.slice.
Apr 24 00:35:33.736599 kubelet[2738]: I0424 00:35:33.736262 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-nginx-config\") pod \"whisker-5bb4b68679-txqx8\" (UID: \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\") " pod="calico-system/whisker-5bb4b68679-txqx8"
Apr 24 00:35:33.736874 kubelet[2738]: I0424 00:35:33.736846 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1b02a44c-0d48-44e4-9230-c06c8d011820-tigera-ca-bundle\") pod \"calico-kube-controllers-85cfb95b74-h2dr7\" (UID: \"1b02a44c-0d48-44e4-9230-c06c8d011820\") " pod="calico-system/calico-kube-controllers-85cfb95b74-h2dr7"
Apr 24 00:35:33.737264 kubelet[2738]: I0424 00:35:33.737215 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b4175a70-02ad-4cf0-b71f-e891c587fabf-calico-apiserver-certs\") pod \"calico-apiserver-7bd6c8766-8k6qz\" (UID: \"b4175a70-02ad-4cf0-b71f-e891c587fabf\") " pod="calico-system/calico-apiserver-7bd6c8766-8k6qz"
Apr 24 00:35:33.737672 kubelet[2738]: I0424 00:35:33.737270 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-whisker-backend-key-pair\") pod \"whisker-5bb4b68679-txqx8\" (UID: \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\") " pod="calico-system/whisker-5bb4b68679-txqx8"
Apr 24 00:35:33.737672 kubelet[2738]: I0424 00:35:33.737350 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frpw\" (UniqueName: \"kubernetes.io/projected/1b02a44c-0d48-44e4-9230-c06c8d011820-kube-api-access-9frpw\") pod \"calico-kube-controllers-85cfb95b74-h2dr7\" (UID: \"1b02a44c-0d48-44e4-9230-c06c8d011820\") " pod="calico-system/calico-kube-controllers-85cfb95b74-h2dr7"
Apr 24 00:35:33.737672 kubelet[2738]: I0424 00:35:33.737396 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7bc7331d-2c65-432e-a66d-716f0351f0c4-goldmane-key-pair\") pod \"goldmane-7fb6cdc5d9-gg2dd\" (UID: \"7bc7331d-2c65-432e-a66d-716f0351f0c4\") " pod="calico-system/goldmane-7fb6cdc5d9-gg2dd"
Apr 24 00:35:33.737672 kubelet[2738]: I0424 00:35:33.737421 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj4r4\" (UniqueName: \"kubernetes.io/projected/7bc7331d-2c65-432e-a66d-716f0351f0c4-kube-api-access-lj4r4\") pod \"goldmane-7fb6cdc5d9-gg2dd\" (UID: \"7bc7331d-2c65-432e-a66d-716f0351f0c4\") " pod="calico-system/goldmane-7fb6cdc5d9-gg2dd"
Apr 24 00:35:33.737672 kubelet[2738]: I0424 00:35:33.737443 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkkpz\" (UniqueName: \"kubernetes.io/projected/4ef68e2d-3d67-4e31-854d-8266f70925bb-kube-api-access-dkkpz\") pod \"coredns-7d764666f9-mm977\" (UID: \"4ef68e2d-3d67-4e31-854d-8266f70925bb\") " pod="kube-system/coredns-7d764666f9-mm977"
Apr 24 00:35:33.737792 kubelet[2738]: I0424 00:35:33.737459 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/256a07d0-0f83-4c59-8ae7-541bcd7973d3-config-volume\") pod \"coredns-7d764666f9-qd789\" (UID: \"256a07d0-0f83-4c59-8ae7-541bcd7973d3\") " pod="kube-system/coredns-7d764666f9-qd789"
Apr 24 00:35:33.737792 kubelet[2738]: I0424 00:35:33.737475 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrl8n\" (UniqueName: \"kubernetes.io/projected/b4175a70-02ad-4cf0-b71f-e891c587fabf-kube-api-access-jrl8n\") pod \"calico-apiserver-7bd6c8766-8k6qz\" (UID: \"b4175a70-02ad-4cf0-b71f-e891c587fabf\") " pod="calico-system/calico-apiserver-7bd6c8766-8k6qz"
Apr 24 00:35:33.737792 kubelet[2738]: I0424 00:35:33.737488 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-whisker-ca-bundle\") pod \"whisker-5bb4b68679-txqx8\" (UID: \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\") " pod="calico-system/whisker-5bb4b68679-txqx8"
Apr 24 00:35:33.737792 kubelet[2738]: I0424 00:35:33.737522 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q79lg\" (UniqueName: \"kubernetes.io/projected/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-kube-api-access-q79lg\") pod \"whisker-5bb4b68679-txqx8\" (UID: \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\") " pod="calico-system/whisker-5bb4b68679-txqx8"
Apr 24 00:35:33.737792 kubelet[2738]: I0424 00:35:33.737577 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwdx8\" (UniqueName: \"kubernetes.io/projected/0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416-kube-api-access-pwdx8\") pod \"calico-apiserver-7bd6c8766-znzx7\" (UID: \"0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416\") " pod="calico-system/calico-apiserver-7bd6c8766-znzx7"
Apr 24 00:35:33.737901 kubelet[2738]: I0424 00:35:33.737623 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qll9\" (UniqueName: \"kubernetes.io/projected/256a07d0-0f83-4c59-8ae7-541bcd7973d3-kube-api-access-4qll9\") pod \"coredns-7d764666f9-qd789\" (UID: \"256a07d0-0f83-4c59-8ae7-541bcd7973d3\") " pod="kube-system/coredns-7d764666f9-qd789"
Apr 24 00:35:33.737901 kubelet[2738]: I0424 00:35:33.737691 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416-calico-apiserver-certs\") pod \"calico-apiserver-7bd6c8766-znzx7\" (UID: \"0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416\") " pod="calico-system/calico-apiserver-7bd6c8766-znzx7"
Apr 24 00:35:33.737901 kubelet[2738]: I0424 00:35:33.737709 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7bc7331d-2c65-432e-a66d-716f0351f0c4-config\") pod \"goldmane-7fb6cdc5d9-gg2dd\" (UID: \"7bc7331d-2c65-432e-a66d-716f0351f0c4\") " pod="calico-system/goldmane-7fb6cdc5d9-gg2dd"
Apr 24 00:35:33.737901 kubelet[2738]: I0424 00:35:33.737742 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bc7331d-2c65-432e-a66d-716f0351f0c4-goldmane-ca-bundle\") pod \"goldmane-7fb6cdc5d9-gg2dd\" (UID: \"7bc7331d-2c65-432e-a66d-716f0351f0c4\") " pod="calico-system/goldmane-7fb6cdc5d9-gg2dd"
Apr 24 00:35:33.737901 kubelet[2738]: I0424 00:35:33.737757 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4ef68e2d-3d67-4e31-854d-8266f70925bb-config-volume\") pod \"coredns-7d764666f9-mm977\" (UID: \"4ef68e2d-3d67-4e31-854d-8266f70925bb\") " pod="kube-system/coredns-7d764666f9-mm977"
Apr 24 00:35:33.975779 kubelet[2738]: E0424 00:35:33.975737 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:35:33.977476 containerd[1559]: time="2026-04-24T00:35:33.977436784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-qd789,Uid:256a07d0-0f83-4c59-8ae7-541bcd7973d3,Namespace:kube-system,Attempt:0,}"
Apr 24 00:35:33.983886 containerd[1559]: time="2026-04-24T00:35:33.983849486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cfb95b74-h2dr7,Uid:1b02a44c-0d48-44e4-9230-c06c8d011820,Namespace:calico-system,Attempt:0,}"
Apr 24 00:35:33.999354 containerd[1559]: time="2026-04-24T00:35:33.999327073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd6c8766-znzx7,Uid:0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416,Namespace:calico-system,Attempt:0,}"
Apr 24 00:35:34.000715 kubelet[2738]: E0424 00:35:34.000650 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:35:34.005802 containerd[1559]: time="2026-04-24T00:35:34.005754577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mm977,Uid:4ef68e2d-3d67-4e31-854d-8266f70925bb,Namespace:kube-system,Attempt:0,}"
Apr 24 00:35:34.007797 containerd[1559]: time="2026-04-24T00:35:34.007737350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd6c8766-8k6qz,Uid:b4175a70-02ad-4cf0-b71f-e891c587fabf,Namespace:calico-system,Attempt:0,}"
Apr 24 00:35:34.021314 containerd[1559]: time="2026-04-24T00:35:34.021242084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-gg2dd,Uid:7bc7331d-2c65-432e-a66d-716f0351f0c4,Namespace:calico-system,Attempt:0,}"
Apr 24 00:35:34.024922 containerd[1559]: time="2026-04-24T00:35:34.024778904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bb4b68679-txqx8,Uid:aa5282ed-55a8-4e86-a3ba-35e1432c0a03,Namespace:calico-system,Attempt:0,}"
Apr 24 00:35:34.168863 containerd[1559]: time="2026-04-24T00:35:34.168820611Z" level=error msg="Failed to destroy network for sandbox \"1d3cb257d2fcb23de855f1367d728b87e2a069b68d9e0b5d929ec291e25ac66a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.172874 containerd[1559]: time="2026-04-24T00:35:34.172765757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-gg2dd,Uid:7bc7331d-2c65-432e-a66d-716f0351f0c4,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d3cb257d2fcb23de855f1367d728b87e2a069b68d9e0b5d929ec291e25ac66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.173242 kubelet[2738]: E0424 00:35:34.173137 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d3cb257d2fcb23de855f1367d728b87e2a069b68d9e0b5d929ec291e25ac66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.173242 kubelet[2738]: E0424 00:35:34.173209 2738 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d3cb257d2fcb23de855f1367d728b87e2a069b68d9e0b5d929ec291e25ac66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7fb6cdc5d9-gg2dd"
Apr 24 00:35:34.173242 kubelet[2738]: E0424 00:35:34.173228 2738 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d3cb257d2fcb23de855f1367d728b87e2a069b68d9e0b5d929ec291e25ac66a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7fb6cdc5d9-gg2dd"
Apr 24 00:35:34.173734 kubelet[2738]: E0424 00:35:34.173510 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7fb6cdc5d9-gg2dd_calico-system(7bc7331d-2c65-432e-a66d-716f0351f0c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7fb6cdc5d9-gg2dd_calico-system(7bc7331d-2c65-432e-a66d-716f0351f0c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d3cb257d2fcb23de855f1367d728b87e2a069b68d9e0b5d929ec291e25ac66a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7fb6cdc5d9-gg2dd" podUID="7bc7331d-2c65-432e-a66d-716f0351f0c4"
Apr 24 00:35:34.226470 containerd[1559]: time="2026-04-24T00:35:34.226272198Z" level=error msg="Failed to destroy network for sandbox \"c03ac4cf28b7ed84b696b2cbf98c7800d4270952862f0282bc242b7388b43aad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.228785 containerd[1559]: time="2026-04-24T00:35:34.228743008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cfb95b74-h2dr7,Uid:1b02a44c-0d48-44e4-9230-c06c8d011820,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c03ac4cf28b7ed84b696b2cbf98c7800d4270952862f0282bc242b7388b43aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.229685 kubelet[2738]: E0424 00:35:34.229362 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c03ac4cf28b7ed84b696b2cbf98c7800d4270952862f0282bc242b7388b43aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.229752 kubelet[2738]: E0424 00:35:34.229687 2738 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c03ac4cf28b7ed84b696b2cbf98c7800d4270952862f0282bc242b7388b43aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cfb95b74-h2dr7"
Apr 24 00:35:34.229752 kubelet[2738]: E0424 00:35:34.229707 2738 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c03ac4cf28b7ed84b696b2cbf98c7800d4270952862f0282bc242b7388b43aad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cfb95b74-h2dr7"
Apr 24 00:35:34.229814 kubelet[2738]: E0424 00:35:34.229765 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85cfb95b74-h2dr7_calico-system(1b02a44c-0d48-44e4-9230-c06c8d011820)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85cfb95b74-h2dr7_calico-system(1b02a44c-0d48-44e4-9230-c06c8d011820)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c03ac4cf28b7ed84b696b2cbf98c7800d4270952862f0282bc242b7388b43aad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85cfb95b74-h2dr7" podUID="1b02a44c-0d48-44e4-9230-c06c8d011820"
Apr 24 00:35:34.231916 containerd[1559]: time="2026-04-24T00:35:34.231870231Z" level=error msg="Failed to destroy network for sandbox \"3fded270d470de1f438883da3e7cc887c214d0e0890716dedd938af074e7e95f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.232601 containerd[1559]: time="2026-04-24T00:35:34.232577725Z" level=error msg="Failed to destroy network for sandbox \"15d8cc7a646c73ca0be0c88f4f41d8f5c1f61c8a309eb23d66e7534033ca3f5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.234619 containerd[1559]: time="2026-04-24T00:35:34.234580228Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd6c8766-znzx7,Uid:0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d8cc7a646c73ca0be0c88f4f41d8f5c1f61c8a309eb23d66e7534033ca3f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.234935 kubelet[2738]: E0424 00:35:34.234903 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d8cc7a646c73ca0be0c88f4f41d8f5c1f61c8a309eb23d66e7534033ca3f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.235168 kubelet[2738]: E0424 00:35:34.235052 2738 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d8cc7a646c73ca0be0c88f4f41d8f5c1f61c8a309eb23d66e7534033ca3f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7bd6c8766-znzx7"
Apr 24 00:35:34.235168 kubelet[2738]: E0424 00:35:34.235102 2738 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15d8cc7a646c73ca0be0c88f4f41d8f5c1f61c8a309eb23d66e7534033ca3f5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7bd6c8766-znzx7"
Apr 24 00:35:34.235427 kubelet[2738]: E0424 00:35:34.235143 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bd6c8766-znzx7_calico-system(0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bd6c8766-znzx7_calico-system(0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15d8cc7a646c73ca0be0c88f4f41d8f5c1f61c8a309eb23d66e7534033ca3f5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7bd6c8766-znzx7" podUID="0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416"
Apr 24 00:35:34.235780 containerd[1559]: time="2026-04-24T00:35:34.235566340Z" level=error msg="Failed to destroy network for sandbox \"73b0f1e33b9cb66ed502277c6141db45018905aefc0eea7d56150b75133200de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.236912 containerd[1559]: time="2026-04-24T00:35:34.236882428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mm977,Uid:4ef68e2d-3d67-4e31-854d-8266f70925bb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fded270d470de1f438883da3e7cc887c214d0e0890716dedd938af074e7e95f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.237351 kubelet[2738]: E0424 00:35:34.237062 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fded270d470de1f438883da3e7cc887c214d0e0890716dedd938af074e7e95f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.237351 kubelet[2738]: E0424 00:35:34.237327 2738 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fded270d470de1f438883da3e7cc887c214d0e0890716dedd938af074e7e95f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-mm977"
Apr 24 00:35:34.237458 kubelet[2738]: E0424 00:35:34.237436 2738 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fded270d470de1f438883da3e7cc887c214d0e0890716dedd938af074e7e95f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-mm977"
Apr 24 00:35:34.237630 kubelet[2738]: E0424 00:35:34.237568 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-mm977_kube-system(4ef68e2d-3d67-4e31-854d-8266f70925bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-mm977_kube-system(4ef68e2d-3d67-4e31-854d-8266f70925bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fded270d470de1f438883da3e7cc887c214d0e0890716dedd938af074e7e95f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-mm977" podUID="4ef68e2d-3d67-4e31-854d-8266f70925bb"
Apr 24 00:35:34.238653 containerd[1559]: time="2026-04-24T00:35:34.238620493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd6c8766-8k6qz,Uid:b4175a70-02ad-4cf0-b71f-e891c587fabf,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b0f1e33b9cb66ed502277c6141db45018905aefc0eea7d56150b75133200de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.238794 kubelet[2738]: E0424 00:35:34.238757 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b0f1e33b9cb66ed502277c6141db45018905aefc0eea7d56150b75133200de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.238833 kubelet[2738]: E0424 00:35:34.238801 2738 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b0f1e33b9cb66ed502277c6141db45018905aefc0eea7d56150b75133200de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7bd6c8766-8k6qz"
Apr 24 00:35:34.238833 kubelet[2738]: E0424 00:35:34.238822 2738 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73b0f1e33b9cb66ed502277c6141db45018905aefc0eea7d56150b75133200de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7bd6c8766-8k6qz"
Apr 24 00:35:34.238933 kubelet[2738]: E0424 00:35:34.238863 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bd6c8766-8k6qz_calico-system(b4175a70-02ad-4cf0-b71f-e891c587fabf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bd6c8766-8k6qz_calico-system(b4175a70-02ad-4cf0-b71f-e891c587fabf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73b0f1e33b9cb66ed502277c6141db45018905aefc0eea7d56150b75133200de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7bd6c8766-8k6qz" podUID="b4175a70-02ad-4cf0-b71f-e891c587fabf"
Apr 24 00:35:34.247394 containerd[1559]: time="2026-04-24T00:35:34.247366838Z" level=error msg="Failed to destroy network for sandbox \"6d6bf7caaaf4fba1e46ea091340972f2a2a469f364bfa496978232eeeb6bab81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.248329 containerd[1559]: time="2026-04-24T00:35:34.248271320Z" level=error msg="Failed to destroy network for sandbox \"4885267af23f9658edba45cc13e2e967834bea72364ab5d9c9d926eae5850945\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.248576 containerd[1559]: time="2026-04-24T00:35:34.248438419Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-qd789,Uid:256a07d0-0f83-4c59-8ae7-541bcd7973d3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6bf7caaaf4fba1e46ea091340972f2a2a469f364bfa496978232eeeb6bab81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.248764 kubelet[2738]: E0424 00:35:34.248730 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6bf7caaaf4fba1e46ea091340972f2a2a469f364bfa496978232eeeb6bab81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.248828 kubelet[2738]: E0424 00:35:34.248761 2738 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6bf7caaaf4fba1e46ea091340972f2a2a469f364bfa496978232eeeb6bab81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-qd789"
Apr 24 00:35:34.248828 kubelet[2738]: E0424 00:35:34.248777 2738 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d6bf7caaaf4fba1e46ea091340972f2a2a469f364bfa496978232eeeb6bab81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-qd789"
Apr 24 00:35:34.248828 kubelet[2738]: E0424 00:35:34.248814 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-qd789_kube-system(256a07d0-0f83-4c59-8ae7-541bcd7973d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-qd789_kube-system(256a07d0-0f83-4c59-8ae7-541bcd7973d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d6bf7caaaf4fba1e46ea091340972f2a2a469f364bfa496978232eeeb6bab81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-qd789" podUID="256a07d0-0f83-4c59-8ae7-541bcd7973d3"
Apr 24 00:35:34.249798 kubelet[2738]: E0424 00:35:34.249500 2738 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4885267af23f9658edba45cc13e2e967834bea72364ab5d9c9d926eae5850945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.249798 kubelet[2738]: E0424 00:35:34.249536 2738 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4885267af23f9658edba45cc13e2e967834bea72364ab5d9c9d926eae5850945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bb4b68679-txqx8"
Apr 24 00:35:34.249798 kubelet[2738]: E0424 00:35:34.249599 2738 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4885267af23f9658edba45cc13e2e967834bea72364ab5d9c9d926eae5850945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5bb4b68679-txqx8"
Apr 24 00:35:34.249892 containerd[1559]: time="2026-04-24T00:35:34.249334831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bb4b68679-txqx8,Uid:aa5282ed-55a8-4e86-a3ba-35e1432c0a03,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4885267af23f9658edba45cc13e2e967834bea72364ab5d9c9d926eae5850945\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 24 00:35:34.249941 kubelet[2738]: E0424 00:35:34.249666 2738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5bb4b68679-txqx8_calico-system(aa5282ed-55a8-4e86-a3ba-35e1432c0a03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5bb4b68679-txqx8_calico-system(aa5282ed-55a8-4e86-a3ba-35e1432c0a03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4885267af23f9658edba45cc13e2e967834bea72364ab5d9c9d926eae5850945\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5bb4b68679-txqx8" podUID="aa5282ed-55a8-4e86-a3ba-35e1432c0a03"
Apr 24 00:35:34.403342 containerd[1559]: time="2026-04-24T00:35:34.403276053Z" level=info msg="CreateContainer within sandbox \"2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 24 00:35:34.411773 containerd[1559]: time="2026-04-24T00:35:34.411706581Z" level=info msg="Container 71aecddedaac1cea66386318b59f0f039f9cab8f01f3a165b9768583dc88ee00: CDI devices from CRI Config.CDIDevices: []"
Apr 24 00:35:34.422693 containerd[1559]: time="2026-04-24T00:35:34.422653227Z" level=info msg="CreateContainer within sandbox \"2b53044c7c1e2e576b3fe28978ce90e3c15dbe6dff0a4a6abbd7973a5d0c23bf\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"71aecddedaac1cea66386318b59f0f039f9cab8f01f3a165b9768583dc88ee00\""
Apr 24 00:35:34.423318 containerd[1559]: time="2026-04-24T00:35:34.423202172Z" level=info msg="StartContainer for \"71aecddedaac1cea66386318b59f0f039f9cab8f01f3a165b9768583dc88ee00\""
Apr 24 00:35:34.425382 containerd[1559]: time="2026-04-24T00:35:34.425344533Z" level=info msg="connecting to shim 71aecddedaac1cea66386318b59f0f039f9cab8f01f3a165b9768583dc88ee00" address="unix:///run/containerd/s/0e17527449e5697e503e86b6453cab416078f7d81948c0770af46eab2d069b89" protocol=ttrpc version=3
Apr 24 00:35:34.460451 systemd[1]: Started cri-containerd-71aecddedaac1cea66386318b59f0f039f9cab8f01f3a165b9768583dc88ee00.scope - libcontainer container 71aecddedaac1cea66386318b59f0f039f9cab8f01f3a165b9768583dc88ee00.
Apr 24 00:35:34.549792 containerd[1559]: time="2026-04-24T00:35:34.549679899Z" level=info msg="StartContainer for \"71aecddedaac1cea66386318b59f0f039f9cab8f01f3a165b9768583dc88ee00\" returns successfully"
Apr 24 00:35:34.745332 kubelet[2738]: I0424 00:35:34.744812 2738 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-whisker-ca-bundle\") pod \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\" (UID: \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\") "
Apr 24 00:35:34.745332 kubelet[2738]: I0424 00:35:34.744885 2738 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-whisker-backend-key-pair\") pod \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\" (UID: \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\") "
Apr 24 00:35:34.745332 kubelet[2738]: I0424 00:35:34.744919 2738 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-kube-api-access-q79lg\" (UniqueName: \"kubernetes.io/projected/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-kube-api-access-q79lg\") pod \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\" (UID: \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\") "
Apr 24 00:35:34.745332 kubelet[2738]: I0424 00:35:34.744957 2738 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-nginx-config\" (UniqueName: \"kubernetes.io/configmap/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-nginx-config\") pod \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\" (UID: \"aa5282ed-55a8-4e86-a3ba-35e1432c0a03\") "
Apr 24 00:35:34.747385 kubelet[2738]: I0424 00:35:34.746967 2738 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-nginx-config" pod "aa5282ed-55a8-4e86-a3ba-35e1432c0a03" (UID: "aa5282ed-55a8-4e86-a3ba-35e1432c0a03"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 00:35:34.754652 kubelet[2738]: I0424 00:35:34.754525 2738 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-kube-api-access-q79lg" pod "aa5282ed-55a8-4e86-a3ba-35e1432c0a03" (UID: "aa5282ed-55a8-4e86-a3ba-35e1432c0a03"). InnerVolumeSpecName "kube-api-access-q79lg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 24 00:35:34.759415 kubelet[2738]: I0424 00:35:34.759374 2738 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-whisker-backend-key-pair" pod "aa5282ed-55a8-4e86-a3ba-35e1432c0a03" (UID: "aa5282ed-55a8-4e86-a3ba-35e1432c0a03"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 24 00:35:34.759763 kubelet[2738]: I0424 00:35:34.759707 2738 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-whisker-ca-bundle" pod "aa5282ed-55a8-4e86-a3ba-35e1432c0a03" (UID: "aa5282ed-55a8-4e86-a3ba-35e1432c0a03"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 24 00:35:34.846132 kubelet[2738]: I0424 00:35:34.845791 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-whisker-backend-key-pair\") on node \"172-236-108-90\" DevicePath \"\""
Apr 24 00:35:34.846506 kubelet[2738]: I0424 00:35:34.846441 2738 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q79lg\" (UniqueName: \"kubernetes.io/projected/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-kube-api-access-q79lg\") on node \"172-236-108-90\" DevicePath \"\""
Apr 24 00:35:34.846506 kubelet[2738]: I0424 00:35:34.846467 2738 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-nginx-config\") on node \"172-236-108-90\" DevicePath \"\""
Apr 24 00:35:34.846506 kubelet[2738]: I0424 00:35:34.846478 2738 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa5282ed-55a8-4e86-a3ba-35e1432c0a03-whisker-ca-bundle\") on node \"172-236-108-90\" DevicePath \"\""
Apr 24 00:35:34.856822 systemd[1]: var-lib-kubelet-pods-aa5282ed\x2d55a8\x2d4e86\x2da3ba\x2d35e1432c0a03-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 24 00:35:34.857387 systemd[1]: var-lib-kubelet-pods-aa5282ed\x2d55a8\x2d4e86\x2da3ba\x2d35e1432c0a03-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq79lg.mount: Deactivated successfully.
Apr 24 00:35:35.277772 systemd[1]: Created slice kubepods-besteffort-podbb05f61e_d422_416a_8c42_0363cb92c2dc.slice - libcontainer container kubepods-besteffort-podbb05f61e_d422_416a_8c42_0363cb92c2dc.slice.
Apr 24 00:35:35.283206 containerd[1559]: time="2026-04-24T00:35:35.283157396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbgbk,Uid:bb05f61e-d422-416a-8c42-0363cb92c2dc,Namespace:calico-system,Attempt:0,}"
Apr 24 00:35:35.428405 systemd[1]: Removed slice kubepods-besteffort-podaa5282ed_55a8_4e86_a3ba_35e1432c0a03.slice - libcontainer container kubepods-besteffort-podaa5282ed_55a8_4e86_a3ba_35e1432c0a03.slice.
Apr 24 00:35:35.458823 systemd-networkd[1444]: calia44770ef833: Link UP
Apr 24 00:35:35.462001 systemd-networkd[1444]: calia44770ef833: Gained carrier
Apr 24 00:35:35.471058 kubelet[2738]: I0424 00:35:35.470751 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-tk88k" podStartSLOduration=2.4158759610000002 podStartE2EDuration="14.470739761s" podCreationTimestamp="2026-04-24 00:35:21 +0000 UTC" firstStartedPulling="2026-04-24 00:35:22.339435829 +0000 UTC m=+18.208611244" lastFinishedPulling="2026-04-24 00:35:34.394299639 +0000 UTC m=+30.263475044" observedRunningTime="2026-04-24 00:35:35.449300202 +0000 UTC m=+31.318475607" watchObservedRunningTime="2026-04-24 00:35:35.470739761 +0000 UTC m=+31.339915166"
Apr 24 00:35:35.491803 containerd[1559]: 2026-04-24 00:35:35.318 [ERROR][3769] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 24 00:35:35.491803 containerd[1559]: 2026-04-24 00:35:35.338 [INFO][3769] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--90-k8s-csi--node--driver--tbgbk-eth0 csi-node-driver- calico-system bb05f61e-d422-416a-8c42-0363cb92c2dc 751 0 2026-04-24 00:35:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6986d7597d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-236-108-90 csi-node-driver-tbgbk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia44770ef833 [] [] }} ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Namespace="calico-system" Pod="csi-node-driver-tbgbk" WorkloadEndpoint="172--236--108--90-k8s-csi--node--driver--tbgbk-"
Apr 24 00:35:35.491803 containerd[1559]: 2026-04-24 00:35:35.338 [INFO][3769] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Namespace="calico-system" Pod="csi-node-driver-tbgbk" WorkloadEndpoint="172--236--108--90-k8s-csi--node--driver--tbgbk-eth0"
Apr 24 00:35:35.491803 containerd[1559]: 2026-04-24 00:35:35.370 [INFO][3780] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" HandleID="k8s-pod-network.9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Workload="172--236--108--90-k8s-csi--node--driver--tbgbk-eth0"
Apr 24 00:35:35.492632 containerd[1559]: 2026-04-24 00:35:35.377 [INFO][3780] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" HandleID="k8s-pod-network.9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Workload="172--236--108--90-k8s-csi--node--driver--tbgbk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285a70), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-90", "pod":"csi-node-driver-tbgbk", "timestamp":"2026-04-24 00:35:35.37049041 +0000 UTC"}, Hostname:"172-236-108-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00033d080)}
Apr 24 00:35:35.492632 containerd[1559]: 2026-04-24 00:35:35.377 [INFO][3780] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 00:35:35.492632 containerd[1559]: 2026-04-24 00:35:35.377 [INFO][3780] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 00:35:35.492632 containerd[1559]: 2026-04-24 00:35:35.377 [INFO][3780] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-90'
Apr 24 00:35:35.492632 containerd[1559]: 2026-04-24 00:35:35.380 [INFO][3780] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" host="172-236-108-90"
Apr 24 00:35:35.492632 containerd[1559]: 2026-04-24 00:35:35.389 [INFO][3780] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-90"
Apr 24 00:35:35.492632 containerd[1559]: 2026-04-24 00:35:35.396 [INFO][3780] ipam/ipam.go 526: Trying affinity for 192.168.112.128/26 host="172-236-108-90"
Apr 24 00:35:35.492632 containerd[1559]: 2026-04-24 00:35:35.398 [INFO][3780] ipam/ipam.go 160: Attempting to load block cidr=192.168.112.128/26 host="172-236-108-90"
Apr 24 00:35:35.492632 containerd[1559]: 2026-04-24 00:35:35.401 [INFO][3780] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="172-236-108-90"
Apr 24 00:35:35.492962 containerd[1559]: 2026-04-24 00:35:35.401 [INFO][3780] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" host="172-236-108-90"
Apr 24 00:35:35.492962 containerd[1559]: 2026-04-24 00:35:35.403 [INFO][3780] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69
Apr 24 00:35:35.492962 containerd[1559]: 2026-04-24 00:35:35.407 [INFO][3780] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" host="172-236-108-90"
Apr 24 00:35:35.492962 containerd[1559]: 2026-04-24 00:35:35.416 [INFO][3780] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.112.129/26] block=192.168.112.128/26 handle="k8s-pod-network.9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" host="172-236-108-90"
Apr 24 00:35:35.492962 containerd[1559]: 2026-04-24 00:35:35.418 [INFO][3780] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.112.129/26] handle="k8s-pod-network.9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" host="172-236-108-90"
Apr 24 00:35:35.492962 containerd[1559]: 2026-04-24 00:35:35.418 [INFO][3780] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 00:35:35.492962 containerd[1559]: 2026-04-24 00:35:35.418 [INFO][3780] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.112.129/26] IPv6=[] ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" HandleID="k8s-pod-network.9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Workload="172--236--108--90-k8s-csi--node--driver--tbgbk-eth0"
Apr 24 00:35:35.493201 containerd[1559]: 2026-04-24 00:35:35.438 [INFO][3769] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Namespace="calico-system" Pod="csi-node-driver-tbgbk" WorkloadEndpoint="172--236--108--90-k8s-csi--node--driver--tbgbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-csi--node--driver--tbgbk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb05f61e-d422-416a-8c42-0363cb92c2dc", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6986d7597d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"", Pod:"csi-node-driver-tbgbk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia44770ef833", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 00:35:35.493337 containerd[1559]: 2026-04-24 00:35:35.439 [INFO][3769] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.129/32] ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Namespace="calico-system" Pod="csi-node-driver-tbgbk" WorkloadEndpoint="172--236--108--90-k8s-csi--node--driver--tbgbk-eth0"
Apr 24 00:35:35.493337 containerd[1559]: 2026-04-24 00:35:35.439 [INFO][3769] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia44770ef833 ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Namespace="calico-system" Pod="csi-node-driver-tbgbk" WorkloadEndpoint="172--236--108--90-k8s-csi--node--driver--tbgbk-eth0"
Apr 24 00:35:35.493337 containerd[1559]: 2026-04-24 00:35:35.463 [INFO][3769] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Namespace="calico-system" Pod="csi-node-driver-tbgbk" WorkloadEndpoint="172--236--108--90-k8s-csi--node--driver--tbgbk-eth0"
Apr 24 00:35:35.493404 containerd[1559]: 2026-04-24 00:35:35.464 [INFO][3769] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Namespace="calico-system" Pod="csi-node-driver-tbgbk" WorkloadEndpoint="172--236--108--90-k8s-csi--node--driver--tbgbk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-csi--node--driver--tbgbk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"bb05f61e-d422-416a-8c42-0363cb92c2dc", ResourceVersion:"751", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6986d7597d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69", Pod:"csi-node-driver-tbgbk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia44770ef833", MAC:"56:1d:80:20:60:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 00:35:35.493454 containerd[1559]: 2026-04-24 00:35:35.484 [INFO][3769] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" Namespace="calico-system" Pod="csi-node-driver-tbgbk" WorkloadEndpoint="172--236--108--90-k8s-csi--node--driver--tbgbk-eth0"
Apr 24 00:35:35.551504 systemd[1]: Created slice kubepods-besteffort-pod56152124_af94_4e51_bd64_4d8c5b413d89.slice - libcontainer container kubepods-besteffort-pod56152124_af94_4e51_bd64_4d8c5b413d89.slice.
Apr 24 00:35:35.557605 kubelet[2738]: I0424 00:35:35.557557 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/56152124-af94-4e51-bd64-4d8c5b413d89-whisker-backend-key-pair\") pod \"whisker-55b899f5bf-7xqg8\" (UID: \"56152124-af94-4e51-bd64-4d8c5b413d89\") " pod="calico-system/whisker-55b899f5bf-7xqg8"
Apr 24 00:35:35.557714 kubelet[2738]: I0424 00:35:35.557620 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/56152124-af94-4e51-bd64-4d8c5b413d89-nginx-config\") pod \"whisker-55b899f5bf-7xqg8\" (UID: \"56152124-af94-4e51-bd64-4d8c5b413d89\") " pod="calico-system/whisker-55b899f5bf-7xqg8"
Apr 24 00:35:35.557714 kubelet[2738]: I0424 00:35:35.557656 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56152124-af94-4e51-bd64-4d8c5b413d89-whisker-ca-bundle\") pod \"whisker-55b899f5bf-7xqg8\" (UID: \"56152124-af94-4e51-bd64-4d8c5b413d89\") " pod="calico-system/whisker-55b899f5bf-7xqg8"
Apr 24 00:35:35.557714 kubelet[2738]: I0424 00:35:35.557678 2738 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxsbq\" (UniqueName: \"kubernetes.io/projected/56152124-af94-4e51-bd64-4d8c5b413d89-kube-api-access-sxsbq\") pod \"whisker-55b899f5bf-7xqg8\" (UID: \"56152124-af94-4e51-bd64-4d8c5b413d89\") " pod="calico-system/whisker-55b899f5bf-7xqg8"
Apr 24 00:35:35.560518 containerd[1559]: time="2026-04-24T00:35:35.560435907Z" level=info msg="connecting to shim 9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69" address="unix:///run/containerd/s/e8455e5ad464b9fbf946b25d2a8c6d8ac4da1eecc38a81531469e19c88a9dcca" namespace=k8s.io protocol=ttrpc version=3
Apr 24 00:35:35.622508 systemd[1]: Started cri-containerd-9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69.scope - libcontainer container 9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69.
Apr 24 00:35:35.695100 containerd[1559]: time="2026-04-24T00:35:35.694979695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tbgbk,Uid:bb05f61e-d422-416a-8c42-0363cb92c2dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69\""
Apr 24 00:35:35.700625 containerd[1559]: time="2026-04-24T00:35:35.700349982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\""
Apr 24 00:35:35.866430 containerd[1559]: time="2026-04-24T00:35:35.866337649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55b899f5bf-7xqg8,Uid:56152124-af94-4e51-bd64-4d8c5b413d89,Namespace:calico-system,Attempt:0,}"
Apr 24 00:35:35.987440 systemd-networkd[1444]: calib0d1ffdd3e8: Link UP
Apr 24 00:35:35.991311 systemd-networkd[1444]: calib0d1ffdd3e8: Gained carrier
Apr 24 00:35:36.007490 containerd[1559]: 2026-04-24 00:35:35.899 [ERROR][3868] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 24 00:35:36.007490 containerd[1559]: 2026-04-24 00:35:35.909 [INFO][3868] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0 whisker-55b899f5bf- calico-system 56152124-af94-4e51-bd64-4d8c5b413d89 943 0 2026-04-24 00:35:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55b899f5bf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-236-108-90 whisker-55b899f5bf-7xqg8 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib0d1ffdd3e8 [] [] }} ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Namespace="calico-system" Pod="whisker-55b899f5bf-7xqg8" WorkloadEndpoint="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-"
Apr 24 00:35:36.007490 containerd[1559]: 2026-04-24 00:35:35.909 [INFO][3868] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Namespace="calico-system" Pod="whisker-55b899f5bf-7xqg8" WorkloadEndpoint="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0"
Apr 24 00:35:36.007490 containerd[1559]: 2026-04-24 00:35:35.935 [INFO][3880] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" HandleID="k8s-pod-network.16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Workload="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0"
Apr 24 00:35:36.007924 containerd[1559]: 2026-04-24 00:35:35.943 [INFO][3880] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" HandleID="k8s-pod-network.16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Workload="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ffd20), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-90", "pod":"whisker-55b899f5bf-7xqg8", "timestamp":"2026-04-24 00:35:35.935749396 +0000 UTC"}, Hostname:"172-236-108-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003951e0)}
Apr 24 00:35:36.007924 containerd[1559]: 2026-04-24 00:35:35.943 [INFO][3880] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 00:35:36.007924 containerd[1559]: 2026-04-24 00:35:35.943 [INFO][3880] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 00:35:36.007924 containerd[1559]: 2026-04-24 00:35:35.943 [INFO][3880] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-90'
Apr 24 00:35:36.007924 containerd[1559]: 2026-04-24 00:35:35.946 [INFO][3880] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" host="172-236-108-90"
Apr 24 00:35:36.007924 containerd[1559]: 2026-04-24 00:35:35.951 [INFO][3880] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-90"
Apr 24 00:35:36.007924 containerd[1559]: 2026-04-24 00:35:35.956 [INFO][3880] ipam/ipam.go 526: Trying affinity for 192.168.112.128/26 host="172-236-108-90"
Apr 24 00:35:36.007924 containerd[1559]: 2026-04-24 00:35:35.958 [INFO][3880] ipam/ipam.go 160: Attempting to load block cidr=192.168.112.128/26 host="172-236-108-90"
Apr 24 00:35:36.007924 containerd[1559]: 2026-04-24 00:35:35.960 [INFO][3880] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="172-236-108-90"
Apr 24 00:35:36.008107 containerd[1559]: 2026-04-24 00:35:35.960
[INFO][3880] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" host="172-236-108-90" Apr 24 00:35:36.008107 containerd[1559]: 2026-04-24 00:35:35.962 [INFO][3880] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422 Apr 24 00:35:36.008107 containerd[1559]: 2026-04-24 00:35:35.965 [INFO][3880] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" host="172-236-108-90" Apr 24 00:35:36.008107 containerd[1559]: 2026-04-24 00:35:35.977 [INFO][3880] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.112.130/26] block=192.168.112.128/26 handle="k8s-pod-network.16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" host="172-236-108-90" Apr 24 00:35:36.008107 containerd[1559]: 2026-04-24 00:35:35.979 [INFO][3880] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.112.130/26] handle="k8s-pod-network.16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" host="172-236-108-90" Apr 24 00:35:36.008107 containerd[1559]: 2026-04-24 00:35:35.979 [INFO][3880] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 00:35:36.008107 containerd[1559]: 2026-04-24 00:35:35.979 [INFO][3880] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.112.130/26] IPv6=[] ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" HandleID="k8s-pod-network.16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Workload="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0" Apr 24 00:35:36.008231 containerd[1559]: 2026-04-24 00:35:35.982 [INFO][3868] cni-plugin/k8s.go 418: Populated endpoint ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Namespace="calico-system" Pod="whisker-55b899f5bf-7xqg8" WorkloadEndpoint="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0", GenerateName:"whisker-55b899f5bf-", Namespace:"calico-system", SelfLink:"", UID:"56152124-af94-4e51-bd64-4d8c5b413d89", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55b899f5bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"", Pod:"whisker-55b899f5bf-7xqg8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"calib0d1ffdd3e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:36.008231 containerd[1559]: 2026-04-24 00:35:35.983 [INFO][3868] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.130/32] ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Namespace="calico-system" Pod="whisker-55b899f5bf-7xqg8" WorkloadEndpoint="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0" Apr 24 00:35:36.008320 containerd[1559]: 2026-04-24 00:35:35.983 [INFO][3868] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0d1ffdd3e8 ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Namespace="calico-system" Pod="whisker-55b899f5bf-7xqg8" WorkloadEndpoint="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0" Apr 24 00:35:36.008320 containerd[1559]: 2026-04-24 00:35:35.992 [INFO][3868] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Namespace="calico-system" Pod="whisker-55b899f5bf-7xqg8" WorkloadEndpoint="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0" Apr 24 00:35:36.008368 containerd[1559]: 2026-04-24 00:35:35.994 [INFO][3868] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Namespace="calico-system" Pod="whisker-55b899f5bf-7xqg8" WorkloadEndpoint="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0", GenerateName:"whisker-55b899f5bf-", Namespace:"calico-system", SelfLink:"", UID:"56152124-af94-4e51-bd64-4d8c5b413d89", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2026, 
time.April, 24, 0, 35, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55b899f5bf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422", Pod:"whisker-55b899f5bf-7xqg8", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib0d1ffdd3e8", MAC:"32:af:9a:b3:59:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:36.008417 containerd[1559]: 2026-04-24 00:35:36.003 [INFO][3868] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" Namespace="calico-system" Pod="whisker-55b899f5bf-7xqg8" WorkloadEndpoint="172--236--108--90-k8s-whisker--55b899f5bf--7xqg8-eth0" Apr 24 00:35:36.055874 containerd[1559]: time="2026-04-24T00:35:36.055820520Z" level=info msg="connecting to shim 16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422" address="unix:///run/containerd/s/ae28530867bdccc4b4e8a07c06a4856c046c65d5149f5bbd3c097ffd3ed1be84" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:36.098420 systemd[1]: Started cri-containerd-16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422.scope - libcontainer container 16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422. 
Apr 24 00:35:36.202858 containerd[1559]: time="2026-04-24T00:35:36.202816081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55b899f5bf-7xqg8,Uid:56152124-af94-4e51-bd64-4d8c5b413d89,Namespace:calico-system,Attempt:0,} returns sandbox id \"16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422\"" Apr 24 00:35:36.273318 kubelet[2738]: I0424 00:35:36.273177 2738 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="aa5282ed-55a8-4e86-a3ba-35e1432c0a03" path="/var/lib/kubelet/pods/aa5282ed-55a8-4e86-a3ba-35e1432c0a03/volumes" Apr 24 00:35:36.729136 systemd-networkd[1444]: calia44770ef833: Gained IPv6LL Apr 24 00:35:37.629639 containerd[1559]: time="2026-04-24T00:35:37.629577395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:37.631047 containerd[1559]: time="2026-04-24T00:35:37.631021145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=8535421" Apr 24 00:35:37.631803 containerd[1559]: time="2026-04-24T00:35:37.631782210Z" level=info msg="ImageCreate event name:\"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:37.634459 containerd[1559]: time="2026-04-24T00:35:37.634418102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:37.635505 containerd[1559]: time="2026-04-24T00:35:37.635484524Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", 
size \"11496846\" in 1.935102793s" Apr 24 00:35:37.635753 containerd[1559]: time="2026-04-24T00:35:37.635588644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\"" Apr 24 00:35:37.637265 containerd[1559]: time="2026-04-24T00:35:37.637234262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 24 00:35:37.641173 containerd[1559]: time="2026-04-24T00:35:37.641086045Z" level=info msg="CreateContainer within sandbox \"9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 00:35:37.651570 containerd[1559]: time="2026-04-24T00:35:37.651536324Z" level=info msg="Container fd3c6f6d1ae429f081a8ed557d6425d65024681da160fe6a219c477c67fbe552: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:37.660080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount74997851.mount: Deactivated successfully. 
Apr 24 00:35:37.673278 containerd[1559]: time="2026-04-24T00:35:37.673247304Z" level=info msg="CreateContainer within sandbox \"9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fd3c6f6d1ae429f081a8ed557d6425d65024681da160fe6a219c477c67fbe552\"" Apr 24 00:35:37.674101 containerd[1559]: time="2026-04-24T00:35:37.674032799Z" level=info msg="StartContainer for \"fd3c6f6d1ae429f081a8ed557d6425d65024681da160fe6a219c477c67fbe552\"" Apr 24 00:35:37.675538 containerd[1559]: time="2026-04-24T00:35:37.675514769Z" level=info msg="connecting to shim fd3c6f6d1ae429f081a8ed557d6425d65024681da160fe6a219c477c67fbe552" address="unix:///run/containerd/s/e8455e5ad464b9fbf946b25d2a8c6d8ac4da1eecc38a81531469e19c88a9dcca" protocol=ttrpc version=3 Apr 24 00:35:37.700452 systemd[1]: Started cri-containerd-fd3c6f6d1ae429f081a8ed557d6425d65024681da160fe6a219c477c67fbe552.scope - libcontainer container fd3c6f6d1ae429f081a8ed557d6425d65024681da160fe6a219c477c67fbe552. 
Apr 24 00:35:37.796768 containerd[1559]: time="2026-04-24T00:35:37.796724075Z" level=info msg="StartContainer for \"fd3c6f6d1ae429f081a8ed557d6425d65024681da160fe6a219c477c67fbe552\" returns successfully" Apr 24 00:35:38.008678 systemd-networkd[1444]: calib0d1ffdd3e8: Gained IPv6LL Apr 24 00:35:38.812668 containerd[1559]: time="2026-04-24T00:35:38.812612645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:38.813743 containerd[1559]: time="2026-04-24T00:35:38.813523649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=6050387" Apr 24 00:35:38.814255 containerd[1559]: time="2026-04-24T00:35:38.814234304Z" level=info msg="ImageCreate event name:\"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:38.815732 containerd[1559]: time="2026-04-24T00:35:38.815710615Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:38.816311 containerd[1559]: time="2026-04-24T00:35:38.816250661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"9011804\" in 1.178920909s" Apr 24 00:35:38.816371 containerd[1559]: time="2026-04-24T00:35:38.816302731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\"" Apr 24 00:35:38.817169 containerd[1559]: 
time="2026-04-24T00:35:38.817145425Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 24 00:35:38.820714 containerd[1559]: time="2026-04-24T00:35:38.820683273Z" level=info msg="CreateContainer within sandbox \"16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 00:35:38.827329 containerd[1559]: time="2026-04-24T00:35:38.826786964Z" level=info msg="Container 2e645f1e0173b0f53c5166e3a5de66ef484215c30ba7f8362b9680cb4b58a781: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:38.836777 containerd[1559]: time="2026-04-24T00:35:38.836750551Z" level=info msg="CreateContainer within sandbox \"16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2e645f1e0173b0f53c5166e3a5de66ef484215c30ba7f8362b9680cb4b58a781\"" Apr 24 00:35:38.838330 containerd[1559]: time="2026-04-24T00:35:38.838307820Z" level=info msg="StartContainer for \"2e645f1e0173b0f53c5166e3a5de66ef484215c30ba7f8362b9680cb4b58a781\"" Apr 24 00:35:38.839596 containerd[1559]: time="2026-04-24T00:35:38.839405353Z" level=info msg="connecting to shim 2e645f1e0173b0f53c5166e3a5de66ef484215c30ba7f8362b9680cb4b58a781" address="unix:///run/containerd/s/ae28530867bdccc4b4e8a07c06a4856c046c65d5149f5bbd3c097ffd3ed1be84" protocol=ttrpc version=3 Apr 24 00:35:38.860417 systemd[1]: Started cri-containerd-2e645f1e0173b0f53c5166e3a5de66ef484215c30ba7f8362b9680cb4b58a781.scope - libcontainer container 2e645f1e0173b0f53c5166e3a5de66ef484215c30ba7f8362b9680cb4b58a781. 
Apr 24 00:35:38.913310 containerd[1559]: time="2026-04-24T00:35:38.913251422Z" level=info msg="StartContainer for \"2e645f1e0173b0f53c5166e3a5de66ef484215c30ba7f8362b9680cb4b58a781\" returns successfully" Apr 24 00:35:40.928770 containerd[1559]: time="2026-04-24T00:35:40.928705479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:40.929862 containerd[1559]: time="2026-04-24T00:35:40.929740613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=13498053" Apr 24 00:35:40.930466 containerd[1559]: time="2026-04-24T00:35:40.930439089Z" level=info msg="ImageCreate event name:\"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:40.932178 containerd[1559]: time="2026-04-24T00:35:40.932145420Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:40.932877 containerd[1559]: time="2026-04-24T00:35:40.932853326Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"16459430\" in 2.115682841s" Apr 24 00:35:40.932957 containerd[1559]: time="2026-04-24T00:35:40.932942625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\"" Apr 24 00:35:40.936044 containerd[1559]: 
time="2026-04-24T00:35:40.936012129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 24 00:35:40.938830 containerd[1559]: time="2026-04-24T00:35:40.938785343Z" level=info msg="CreateContainer within sandbox \"9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 00:35:40.948374 containerd[1559]: time="2026-04-24T00:35:40.948219172Z" level=info msg="Container 3b5944d47894fc6f77703b16a733d624f82cb943b4fcfb29d7894a2bb922468a: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:40.955798 containerd[1559]: time="2026-04-24T00:35:40.955766890Z" level=info msg="CreateContainer within sandbox \"9640039ffb467c12b6ec124f725a85aaf268831dff4b580acb829476ea1eed69\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3b5944d47894fc6f77703b16a733d624f82cb943b4fcfb29d7894a2bb922468a\"" Apr 24 00:35:40.956678 containerd[1559]: time="2026-04-24T00:35:40.956311947Z" level=info msg="StartContainer for \"3b5944d47894fc6f77703b16a733d624f82cb943b4fcfb29d7894a2bb922468a\"" Apr 24 00:35:40.958112 containerd[1559]: time="2026-04-24T00:35:40.958080348Z" level=info msg="connecting to shim 3b5944d47894fc6f77703b16a733d624f82cb943b4fcfb29d7894a2bb922468a" address="unix:///run/containerd/s/e8455e5ad464b9fbf946b25d2a8c6d8ac4da1eecc38a81531469e19c88a9dcca" protocol=ttrpc version=3 Apr 24 00:35:40.987580 systemd[1]: Started cri-containerd-3b5944d47894fc6f77703b16a733d624f82cb943b4fcfb29d7894a2bb922468a.scope - libcontainer container 3b5944d47894fc6f77703b16a733d624f82cb943b4fcfb29d7894a2bb922468a. 
Apr 24 00:35:41.070534 containerd[1559]: time="2026-04-24T00:35:41.070451639Z" level=info msg="StartContainer for \"3b5944d47894fc6f77703b16a733d624f82cb943b4fcfb29d7894a2bb922468a\" returns successfully" Apr 24 00:35:41.345317 kubelet[2738]: I0424 00:35:41.345184 2738 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 00:35:41.345317 kubelet[2738]: I0424 00:35:41.345217 2738 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 00:35:41.914795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318742236.mount: Deactivated successfully. Apr 24 00:35:41.929244 containerd[1559]: time="2026-04-24T00:35:41.928669643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:41.929244 containerd[1559]: time="2026-04-24T00:35:41.929216490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=17000660" Apr 24 00:35:41.930794 containerd[1559]: time="2026-04-24T00:35:41.930771442Z" level=info msg="ImageCreate event name:\"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:41.936966 containerd[1559]: time="2026-04-24T00:35:41.936940360Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:41.937828 containerd[1559]: time="2026-04-24T00:35:41.937419498Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id 
\"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"17000490\" in 1.001371379s" Apr 24 00:35:41.938118 containerd[1559]: time="2026-04-24T00:35:41.938101444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\"" Apr 24 00:35:41.941690 containerd[1559]: time="2026-04-24T00:35:41.941669466Z" level=info msg="CreateContainer within sandbox \"16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 00:35:41.948173 containerd[1559]: time="2026-04-24T00:35:41.947456367Z" level=info msg="Container d471edc14dfa49b8a2bde9e1aa291477bc1f177d3b9444d707c55613ec831ae2: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:41.974198 containerd[1559]: time="2026-04-24T00:35:41.974165121Z" level=info msg="CreateContainer within sandbox \"16c08c5a3b90edc2d5b89a0809c5a16ea3a09e8f01fa46069c268497f2fec422\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"d471edc14dfa49b8a2bde9e1aa291477bc1f177d3b9444d707c55613ec831ae2\"" Apr 24 00:35:41.975474 containerd[1559]: time="2026-04-24T00:35:41.975444605Z" level=info msg="StartContainer for \"d471edc14dfa49b8a2bde9e1aa291477bc1f177d3b9444d707c55613ec831ae2\"" Apr 24 00:35:41.977049 containerd[1559]: time="2026-04-24T00:35:41.977015456Z" level=info msg="connecting to shim d471edc14dfa49b8a2bde9e1aa291477bc1f177d3b9444d707c55613ec831ae2" address="unix:///run/containerd/s/ae28530867bdccc4b4e8a07c06a4856c046c65d5149f5bbd3c097ffd3ed1be84" protocol=ttrpc version=3 Apr 24 00:35:42.001455 systemd[1]: Started cri-containerd-d471edc14dfa49b8a2bde9e1aa291477bc1f177d3b9444d707c55613ec831ae2.scope - 
libcontainer container d471edc14dfa49b8a2bde9e1aa291477bc1f177d3b9444d707c55613ec831ae2. Apr 24 00:35:42.064962 containerd[1559]: time="2026-04-24T00:35:42.064456537Z" level=info msg="StartContainer for \"d471edc14dfa49b8a2bde9e1aa291477bc1f177d3b9444d707c55613ec831ae2\" returns successfully" Apr 24 00:35:42.459310 kubelet[2738]: I0424 00:35:42.457666 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-55b899f5bf-7xqg8" podStartSLOduration=1.7251580789999998 podStartE2EDuration="7.457654205s" podCreationTimestamp="2026-04-24 00:35:35 +0000 UTC" firstStartedPulling="2026-04-24 00:35:36.206526124 +0000 UTC m=+32.075701539" lastFinishedPulling="2026-04-24 00:35:41.93902226 +0000 UTC m=+37.808197665" observedRunningTime="2026-04-24 00:35:42.457565786 +0000 UTC m=+38.326741201" watchObservedRunningTime="2026-04-24 00:35:42.457654205 +0000 UTC m=+38.326829610" Apr 24 00:35:42.459310 kubelet[2738]: I0424 00:35:42.457848 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-tbgbk" podStartSLOduration=16.22381452 podStartE2EDuration="21.457843904s" podCreationTimestamp="2026-04-24 00:35:21 +0000 UTC" firstStartedPulling="2026-04-24 00:35:35.699780186 +0000 UTC m=+31.568955591" lastFinishedPulling="2026-04-24 00:35:40.93380957 +0000 UTC m=+36.802984975" observedRunningTime="2026-04-24 00:35:41.460951772 +0000 UTC m=+37.330127217" watchObservedRunningTime="2026-04-24 00:35:42.457843904 +0000 UTC m=+38.327019309" Apr 24 00:35:43.014705 kubelet[2738]: I0424 00:35:43.014672 2738 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 00:35:43.016380 kubelet[2738]: E0424 00:35:43.015184 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:43.452361 kubelet[2738]: E0424 00:35:43.452325 2738 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:44.390032 systemd-networkd[1444]: vxlan.calico: Link UP Apr 24 00:35:44.390045 systemd-networkd[1444]: vxlan.calico: Gained carrier Apr 24 00:35:45.272902 containerd[1559]: time="2026-04-24T00:35:45.272848674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd6c8766-znzx7,Uid:0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416,Namespace:calico-system,Attempt:0,}" Apr 24 00:35:45.423830 systemd-networkd[1444]: cali3a8732103cf: Link UP Apr 24 00:35:45.424978 systemd-networkd[1444]: cali3a8732103cf: Gained carrier Apr 24 00:35:45.450803 containerd[1559]: 2026-04-24 00:35:45.320 [INFO][4501] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0 calico-apiserver-7bd6c8766- calico-system 0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416 887 0 2026-04-24 00:35:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bd6c8766 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-108-90 calico-apiserver-7bd6c8766-znzx7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3a8732103cf [] [] }} ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-znzx7" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-" Apr 24 00:35:45.450803 containerd[1559]: 2026-04-24 00:35:45.320 [INFO][4501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-znzx7" 
WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0" Apr 24 00:35:45.450803 containerd[1559]: 2026-04-24 00:35:45.357 [INFO][4514] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" HandleID="k8s-pod-network.e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Workload="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0" Apr 24 00:35:45.451075 containerd[1559]: 2026-04-24 00:35:45.365 [INFO][4514] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" HandleID="k8s-pod-network.e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Workload="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f3de0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-90", "pod":"calico-apiserver-7bd6c8766-znzx7", "timestamp":"2026-04-24 00:35:45.357204142 +0000 UTC"}, Hostname:"172-236-108-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000379b80)} Apr 24 00:35:45.451075 containerd[1559]: 2026-04-24 00:35:45.365 [INFO][4514] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:35:45.451075 containerd[1559]: 2026-04-24 00:35:45.365 [INFO][4514] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 00:35:45.451075 containerd[1559]: 2026-04-24 00:35:45.365 [INFO][4514] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-90' Apr 24 00:35:45.451075 containerd[1559]: 2026-04-24 00:35:45.368 [INFO][4514] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" host="172-236-108-90" Apr 24 00:35:45.451075 containerd[1559]: 2026-04-24 00:35:45.373 [INFO][4514] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-90" Apr 24 00:35:45.451075 containerd[1559]: 2026-04-24 00:35:45.378 [INFO][4514] ipam/ipam.go 526: Trying affinity for 192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:45.451075 containerd[1559]: 2026-04-24 00:35:45.380 [INFO][4514] ipam/ipam.go 160: Attempting to load block cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:45.451075 containerd[1559]: 2026-04-24 00:35:45.381 [INFO][4514] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:45.451801 containerd[1559]: 2026-04-24 00:35:45.381 [INFO][4514] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" host="172-236-108-90" Apr 24 00:35:45.451801 containerd[1559]: 2026-04-24 00:35:45.383 [INFO][4514] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc Apr 24 00:35:45.451801 containerd[1559]: 2026-04-24 00:35:45.389 [INFO][4514] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" host="172-236-108-90" Apr 24 00:35:45.451801 containerd[1559]: 2026-04-24 00:35:45.414 [INFO][4514] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.112.131/26] block=192.168.112.128/26 
handle="k8s-pod-network.e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" host="172-236-108-90" Apr 24 00:35:45.451801 containerd[1559]: 2026-04-24 00:35:45.414 [INFO][4514] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.112.131/26] handle="k8s-pod-network.e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" host="172-236-108-90" Apr 24 00:35:45.451801 containerd[1559]: 2026-04-24 00:35:45.414 [INFO][4514] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:35:45.451801 containerd[1559]: 2026-04-24 00:35:45.414 [INFO][4514] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.112.131/26] IPv6=[] ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" HandleID="k8s-pod-network.e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Workload="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0" Apr 24 00:35:45.452269 containerd[1559]: 2026-04-24 00:35:45.419 [INFO][4501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-znzx7" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0", GenerateName:"calico-apiserver-7bd6c8766-", Namespace:"calico-system", SelfLink:"", UID:"0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd6c8766", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"", Pod:"calico-apiserver-7bd6c8766-znzx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3a8732103cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:45.452446 containerd[1559]: 2026-04-24 00:35:45.419 [INFO][4501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.131/32] ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-znzx7" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0" Apr 24 00:35:45.452446 containerd[1559]: 2026-04-24 00:35:45.419 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a8732103cf ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-znzx7" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0" Apr 24 00:35:45.452446 containerd[1559]: 2026-04-24 00:35:45.425 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-znzx7" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0" Apr 24 00:35:45.453182 containerd[1559]: 2026-04-24 00:35:45.426 [INFO][4501] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-znzx7" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0", GenerateName:"calico-apiserver-7bd6c8766-", Namespace:"calico-system", SelfLink:"", UID:"0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd6c8766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc", Pod:"calico-apiserver-7bd6c8766-znzx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3a8732103cf", MAC:"ea:83:ae:bd:90:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:45.453472 containerd[1559]: 2026-04-24 00:35:45.444 [INFO][4501] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-znzx7" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--znzx7-eth0" Apr 24 00:35:45.495322 containerd[1559]: time="2026-04-24T00:35:45.494999762Z" level=info msg="connecting to shim e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc" address="unix:///run/containerd/s/81515ff8f65fcc5fab5d3c68165c7751eb01608bd2553f11259718ac32533ce7" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:45.536528 systemd[1]: Started cri-containerd-e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc.scope - libcontainer container e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc. Apr 24 00:35:45.607992 containerd[1559]: time="2026-04-24T00:35:45.607939243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd6c8766-znzx7,Uid:0d2d0b97-06b2-4b96-a4e9-02b2e0a0e416,Namespace:calico-system,Attempt:0,} returns sandbox id \"e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc\"" Apr 24 00:35:45.611511 containerd[1559]: time="2026-04-24T00:35:45.611470861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 24 00:35:46.392555 systemd-networkd[1444]: vxlan.calico: Gained IPv6LL Apr 24 00:35:47.034934 systemd-networkd[1444]: cali3a8732103cf: Gained IPv6LL Apr 24 00:35:47.275929 containerd[1559]: time="2026-04-24T00:35:47.275807409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd6c8766-8k6qz,Uid:b4175a70-02ad-4cf0-b71f-e891c587fabf,Namespace:calico-system,Attempt:0,}" Apr 24 00:35:47.278258 kubelet[2738]: E0424 00:35:47.277016 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:47.278258 kubelet[2738]: E0424 00:35:47.277662 2738 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:47.282648 containerd[1559]: time="2026-04-24T00:35:47.279746277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-qd789,Uid:256a07d0-0f83-4c59-8ae7-541bcd7973d3,Namespace:kube-system,Attempt:0,}" Apr 24 00:35:47.291643 containerd[1559]: time="2026-04-24T00:35:47.290491993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mm977,Uid:4ef68e2d-3d67-4e31-854d-8266f70925bb,Namespace:kube-system,Attempt:0,}" Apr 24 00:35:47.596491 systemd-networkd[1444]: cali35b3c6d7363: Link UP Apr 24 00:35:47.598210 systemd-networkd[1444]: cali35b3c6d7363: Gained carrier Apr 24 00:35:47.622319 containerd[1559]: 2026-04-24 00:35:47.456 [INFO][4590] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0 coredns-7d764666f9- kube-system 256a07d0-0f83-4c59-8ae7-541bcd7973d3 881 0 2026-04-24 00:35:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-108-90 coredns-7d764666f9-qd789 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali35b3c6d7363 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" Namespace="kube-system" Pod="coredns-7d764666f9-qd789" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--qd789-" Apr 24 00:35:47.622319 containerd[1559]: 2026-04-24 00:35:47.457 [INFO][4590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" 
Namespace="kube-system" Pod="coredns-7d764666f9-qd789" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0" Apr 24 00:35:47.622319 containerd[1559]: 2026-04-24 00:35:47.521 [INFO][4621] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" HandleID="k8s-pod-network.19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" Workload="172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0" Apr 24 00:35:47.622522 containerd[1559]: 2026-04-24 00:35:47.531 [INFO][4621] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" HandleID="k8s-pod-network.19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" Workload="172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002857e0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-108-90", "pod":"coredns-7d764666f9-qd789", "timestamp":"2026-04-24 00:35:47.521911388 +0000 UTC"}, Hostname:"172-236-108-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f4f20)} Apr 24 00:35:47.622522 containerd[1559]: 2026-04-24 00:35:47.532 [INFO][4621] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:35:47.622522 containerd[1559]: 2026-04-24 00:35:47.532 [INFO][4621] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 00:35:47.622522 containerd[1559]: 2026-04-24 00:35:47.532 [INFO][4621] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-90' Apr 24 00:35:47.622522 containerd[1559]: 2026-04-24 00:35:47.535 [INFO][4621] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" host="172-236-108-90" Apr 24 00:35:47.622522 containerd[1559]: 2026-04-24 00:35:47.541 [INFO][4621] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-90" Apr 24 00:35:47.622522 containerd[1559]: 2026-04-24 00:35:47.550 [INFO][4621] ipam/ipam.go 526: Trying affinity for 192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:47.622522 containerd[1559]: 2026-04-24 00:35:47.553 [INFO][4621] ipam/ipam.go 160: Attempting to load block cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:47.622522 containerd[1559]: 2026-04-24 00:35:47.562 [INFO][4621] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:47.622731 containerd[1559]: 2026-04-24 00:35:47.562 [INFO][4621] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" host="172-236-108-90" Apr 24 00:35:47.622731 containerd[1559]: 2026-04-24 00:35:47.568 [INFO][4621] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27 Apr 24 00:35:47.622731 containerd[1559]: 2026-04-24 00:35:47.575 [INFO][4621] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" host="172-236-108-90" Apr 24 00:35:47.622731 containerd[1559]: 2026-04-24 00:35:47.587 [INFO][4621] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.112.132/26] block=192.168.112.128/26 
handle="k8s-pod-network.19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" host="172-236-108-90" Apr 24 00:35:47.622731 containerd[1559]: 2026-04-24 00:35:47.587 [INFO][4621] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.112.132/26] handle="k8s-pod-network.19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" host="172-236-108-90" Apr 24 00:35:47.622731 containerd[1559]: 2026-04-24 00:35:47.587 [INFO][4621] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:35:47.622731 containerd[1559]: 2026-04-24 00:35:47.587 [INFO][4621] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.112.132/26] IPv6=[] ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" HandleID="k8s-pod-network.19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" Workload="172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0" Apr 24 00:35:47.622863 containerd[1559]: 2026-04-24 00:35:47.592 [INFO][4590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" Namespace="kube-system" Pod="coredns-7d764666f9-qd789" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"256a07d0-0f83-4c59-8ae7-541bcd7973d3", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"", Pod:"coredns-7d764666f9-qd789", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35b3c6d7363", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:47.622863 containerd[1559]: 2026-04-24 00:35:47.592 [INFO][4590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.132/32] ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" Namespace="kube-system" Pod="coredns-7d764666f9-qd789" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0" Apr 24 00:35:47.622863 containerd[1559]: 2026-04-24 00:35:47.592 [INFO][4590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35b3c6d7363 ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" Namespace="kube-system" Pod="coredns-7d764666f9-qd789" 
WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0" Apr 24 00:35:47.622863 containerd[1559]: 2026-04-24 00:35:47.597 [INFO][4590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" Namespace="kube-system" Pod="coredns-7d764666f9-qd789" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0" Apr 24 00:35:47.622863 containerd[1559]: 2026-04-24 00:35:47.598 [INFO][4590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" Namespace="kube-system" Pod="coredns-7d764666f9-qd789" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"256a07d0-0f83-4c59-8ae7-541bcd7973d3", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27", Pod:"coredns-7d764666f9-qd789", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali35b3c6d7363", MAC:"8e:6e:31:c0:10:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:47.622863 containerd[1559]: 2026-04-24 00:35:47.615 [INFO][4590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" Namespace="kube-system" Pod="coredns-7d764666f9-qd789" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--qd789-eth0" Apr 24 00:35:47.673082 containerd[1559]: time="2026-04-24T00:35:47.672221726Z" level=info msg="connecting to shim 19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27" address="unix:///run/containerd/s/ecf76e59b42f96a3a6e7e42e01e5c699396caa5ceaa8498fb633fe4f57fcf0f0" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:47.740776 systemd-networkd[1444]: cali8fd9bf14d8d: Link UP Apr 24 00:35:47.743335 systemd-networkd[1444]: cali8fd9bf14d8d: Gained carrier Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.420 [INFO][4578] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0 calico-apiserver-7bd6c8766- calico-system b4175a70-02ad-4cf0-b71f-e891c587fabf 882 0 2026-04-24 00:35:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bd6c8766 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-236-108-90 calico-apiserver-7bd6c8766-8k6qz eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali8fd9bf14d8d [] [] }} ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-8k6qz" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.420 [INFO][4578] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-8k6qz" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.561 [INFO][4613] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" HandleID="k8s-pod-network.07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Workload="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.569 [INFO][4613] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" HandleID="k8s-pod-network.07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Workload="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc00031f860), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-90", "pod":"calico-apiserver-7bd6c8766-8k6qz", "timestamp":"2026-04-24 00:35:47.561483484 +0000 UTC"}, Hostname:"172-236-108-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000442dc0)} Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.569 [INFO][4613] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.588 [INFO][4613] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.588 [INFO][4613] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-90' Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.637 [INFO][4613] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" host="172-236-108-90" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.663 [INFO][4613] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-90" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.676 [INFO][4613] ipam/ipam.go 526: Trying affinity for 192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.680 [INFO][4613] ipam/ipam.go 160: Attempting to load block cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.684 [INFO][4613] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.684 [INFO][4613] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.112.128/26 
handle="k8s-pod-network.07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" host="172-236-108-90" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.687 [INFO][4613] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4 Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.694 [INFO][4613] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" host="172-236-108-90" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.707 [INFO][4613] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.112.133/26] block=192.168.112.128/26 handle="k8s-pod-network.07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" host="172-236-108-90" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.707 [INFO][4613] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.112.133/26] handle="k8s-pod-network.07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" host="172-236-108-90" Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.708 [INFO][4613] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 00:35:47.772617 containerd[1559]: 2026-04-24 00:35:47.708 [INFO][4613] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.112.133/26] IPv6=[] ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" HandleID="k8s-pod-network.07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Workload="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0" Apr 24 00:35:47.773112 containerd[1559]: 2026-04-24 00:35:47.724 [INFO][4578] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-8k6qz" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0", GenerateName:"calico-apiserver-7bd6c8766-", Namespace:"calico-system", SelfLink:"", UID:"b4175a70-02ad-4cf0-b71f-e891c587fabf", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd6c8766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"", Pod:"calico-apiserver-7bd6c8766-8k6qz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8fd9bf14d8d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:47.773112 containerd[1559]: 2026-04-24 00:35:47.724 [INFO][4578] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.133/32] ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-8k6qz" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0" Apr 24 00:35:47.773112 containerd[1559]: 2026-04-24 00:35:47.724 [INFO][4578] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fd9bf14d8d ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-8k6qz" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0" Apr 24 00:35:47.773112 containerd[1559]: 2026-04-24 00:35:47.750 [INFO][4578] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-8k6qz" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0" Apr 24 00:35:47.773112 containerd[1559]: 2026-04-24 00:35:47.751 [INFO][4578] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-8k6qz" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0", 
GenerateName:"calico-apiserver-7bd6c8766-", Namespace:"calico-system", SelfLink:"", UID:"b4175a70-02ad-4cf0-b71f-e891c587fabf", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd6c8766", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4", Pod:"calico-apiserver-7bd6c8766-8k6qz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali8fd9bf14d8d", MAC:"02:4a:af:c0:7d:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:47.773112 containerd[1559]: 2026-04-24 00:35:47.767 [INFO][4578] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" Namespace="calico-system" Pod="calico-apiserver-7bd6c8766-8k6qz" WorkloadEndpoint="172--236--108--90-k8s-calico--apiserver--7bd6c8766--8k6qz-eth0" Apr 24 00:35:47.777584 systemd[1]: Started cri-containerd-19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27.scope - libcontainer container 19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27. 
Apr 24 00:35:47.828678 systemd-networkd[1444]: cali651600196c8: Link UP Apr 24 00:35:47.835023 systemd-networkd[1444]: cali651600196c8: Gained carrier Apr 24 00:35:47.853960 containerd[1559]: time="2026-04-24T00:35:47.853533949Z" level=info msg="connecting to shim 07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4" address="unix:///run/containerd/s/0a5669d335aa8ba82411a527eb732a19d3a6b703b015b265e97150ce4e162c02" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.447 [INFO][4587] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0 coredns-7d764666f9- kube-system 4ef68e2d-3d67-4e31-854d-8266f70925bb 884 0 2026-04-24 00:35:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-108-90 coredns-7d764666f9-mm977 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali651600196c8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Namespace="kube-system" Pod="coredns-7d764666f9-mm977" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--mm977-" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.451 [INFO][4587] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Namespace="kube-system" Pod="coredns-7d764666f9-mm977" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.627 [INFO][4619] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" HandleID="k8s-pod-network.161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Workload="172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.666 [INFO][4619] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" HandleID="k8s-pod-network.161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Workload="172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003889c0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-108-90", "pod":"coredns-7d764666f9-mm977", "timestamp":"2026-04-24 00:35:47.627515917 +0000 UTC"}, Hostname:"172-236-108-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000370000)} Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.667 [INFO][4619] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.710 [INFO][4619] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.710 [INFO][4619] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-90' Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.737 [INFO][4619] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" host="172-236-108-90" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.749 [INFO][4619] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-90" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.784 [INFO][4619] ipam/ipam.go 526: Trying affinity for 192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.790 [INFO][4619] ipam/ipam.go 160: Attempting to load block cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.792 [INFO][4619] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.793 [INFO][4619] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" host="172-236-108-90" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.795 [INFO][4619] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.799 [INFO][4619] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" host="172-236-108-90" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.807 [INFO][4619] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.112.134/26] block=192.168.112.128/26 
handle="k8s-pod-network.161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" host="172-236-108-90" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.808 [INFO][4619] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.112.134/26] handle="k8s-pod-network.161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" host="172-236-108-90" Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.808 [INFO][4619] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:35:47.868921 containerd[1559]: 2026-04-24 00:35:47.808 [INFO][4619] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.112.134/26] IPv6=[] ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" HandleID="k8s-pod-network.161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Workload="172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0" Apr 24 00:35:47.869485 containerd[1559]: 2026-04-24 00:35:47.813 [INFO][4587] cni-plugin/k8s.go 418: Populated endpoint ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Namespace="kube-system" Pod="coredns-7d764666f9-mm977" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4ef68e2d-3d67-4e31-854d-8266f70925bb", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"", Pod:"coredns-7d764666f9-mm977", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali651600196c8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:47.869485 containerd[1559]: 2026-04-24 00:35:47.814 [INFO][4587] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.134/32] ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Namespace="kube-system" Pod="coredns-7d764666f9-mm977" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0" Apr 24 00:35:47.869485 containerd[1559]: 2026-04-24 00:35:47.814 [INFO][4587] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali651600196c8 ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Namespace="kube-system" Pod="coredns-7d764666f9-mm977" 
WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0" Apr 24 00:35:47.869485 containerd[1559]: 2026-04-24 00:35:47.836 [INFO][4587] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Namespace="kube-system" Pod="coredns-7d764666f9-mm977" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0" Apr 24 00:35:47.869485 containerd[1559]: 2026-04-24 00:35:47.836 [INFO][4587] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Namespace="kube-system" Pod="coredns-7d764666f9-mm977" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4ef68e2d-3d67-4e31-854d-8266f70925bb", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db", Pod:"coredns-7d764666f9-mm977", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali651600196c8", MAC:"ca:3f:00:18:47:2c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:47.869485 containerd[1559]: 2026-04-24 00:35:47.862 [INFO][4587] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" Namespace="kube-system" Pod="coredns-7d764666f9-mm977" WorkloadEndpoint="172--236--108--90-k8s-coredns--7d764666f9--mm977-eth0" Apr 24 00:35:47.925854 systemd[1]: Started cri-containerd-07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4.scope - libcontainer container 07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4. 
Apr 24 00:35:47.933945 containerd[1559]: time="2026-04-24T00:35:47.933852487Z" level=info msg="connecting to shim 161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db" address="unix:///run/containerd/s/3718205717836f54d1919aa816392c64f3438f5ccc18c404098f094e7003de98" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:47.938384 containerd[1559]: time="2026-04-24T00:35:47.938258963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-qd789,Uid:256a07d0-0f83-4c59-8ae7-541bcd7973d3,Namespace:kube-system,Attempt:0,} returns sandbox id \"19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27\"" Apr 24 00:35:47.940617 kubelet[2738]: E0424 00:35:47.940590 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:47.948480 containerd[1559]: time="2026-04-24T00:35:47.948340151Z" level=info msg="CreateContainer within sandbox \"19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 00:35:47.969044 containerd[1559]: time="2026-04-24T00:35:47.968347398Z" level=info msg="Container 788c6c6e9c2a1a973a0320fd356ed90a5c905a19e1c98b955fddfcadb1686b95: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:47.977689 containerd[1559]: time="2026-04-24T00:35:47.977647210Z" level=info msg="CreateContainer within sandbox \"19c0e3a234eaeea0a984766bcf72fa9396be00072ba19a3a62dbe3af7914eb27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"788c6c6e9c2a1a973a0320fd356ed90a5c905a19e1c98b955fddfcadb1686b95\"" Apr 24 00:35:47.979304 containerd[1559]: time="2026-04-24T00:35:47.979265754Z" level=info msg="StartContainer for \"788c6c6e9c2a1a973a0320fd356ed90a5c905a19e1c98b955fddfcadb1686b95\"" Apr 24 00:35:47.985056 containerd[1559]: time="2026-04-24T00:35:47.985008816Z" level=info msg="connecting to 
shim 788c6c6e9c2a1a973a0320fd356ed90a5c905a19e1c98b955fddfcadb1686b95" address="unix:///run/containerd/s/ecf76e59b42f96a3a6e7e42e01e5c699396caa5ceaa8498fb633fe4f57fcf0f0" protocol=ttrpc version=3 Apr 24 00:35:47.995443 systemd[1]: Started cri-containerd-161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db.scope - libcontainer container 161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db. Apr 24 00:35:48.013833 systemd[1]: Started cri-containerd-788c6c6e9c2a1a973a0320fd356ed90a5c905a19e1c98b955fddfcadb1686b95.scope - libcontainer container 788c6c6e9c2a1a973a0320fd356ed90a5c905a19e1c98b955fddfcadb1686b95. Apr 24 00:35:48.106330 containerd[1559]: time="2026-04-24T00:35:48.106063045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd6c8766-8k6qz,Uid:b4175a70-02ad-4cf0-b71f-e891c587fabf,Namespace:calico-system,Attempt:0,} returns sandbox id \"07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4\"" Apr 24 00:35:48.116093 containerd[1559]: time="2026-04-24T00:35:48.115741247Z" level=info msg="StartContainer for \"788c6c6e9c2a1a973a0320fd356ed90a5c905a19e1c98b955fddfcadb1686b95\" returns successfully" Apr 24 00:35:48.132974 containerd[1559]: time="2026-04-24T00:35:48.132934627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-mm977,Uid:4ef68e2d-3d67-4e31-854d-8266f70925bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db\"" Apr 24 00:35:48.133952 kubelet[2738]: E0424 00:35:48.133922 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:48.144512 containerd[1559]: time="2026-04-24T00:35:48.144001816Z" level=info msg="CreateContainer within sandbox \"161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 00:35:48.157340 containerd[1559]: time="2026-04-24T00:35:48.157310667Z" level=info msg="Container 454c6dcd402265120f4ebba5a45ddd77f77fdb454bcb0a6fdfded4f1971846cc: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:48.164360 containerd[1559]: time="2026-04-24T00:35:48.164328317Z" level=info msg="CreateContainer within sandbox \"161e42ba62f1680dc43831d126d93d0492e2a4cd1943286da1cfeb11f04e95db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"454c6dcd402265120f4ebba5a45ddd77f77fdb454bcb0a6fdfded4f1971846cc\"" Apr 24 00:35:48.165652 containerd[1559]: time="2026-04-24T00:35:48.165624074Z" level=info msg="StartContainer for \"454c6dcd402265120f4ebba5a45ddd77f77fdb454bcb0a6fdfded4f1971846cc\"" Apr 24 00:35:48.167669 containerd[1559]: time="2026-04-24T00:35:48.167618328Z" level=info msg="connecting to shim 454c6dcd402265120f4ebba5a45ddd77f77fdb454bcb0a6fdfded4f1971846cc" address="unix:///run/containerd/s/3718205717836f54d1919aa816392c64f3438f5ccc18c404098f094e7003de98" protocol=ttrpc version=3 Apr 24 00:35:48.197734 systemd[1]: Started cri-containerd-454c6dcd402265120f4ebba5a45ddd77f77fdb454bcb0a6fdfded4f1971846cc.scope - libcontainer container 454c6dcd402265120f4ebba5a45ddd77f77fdb454bcb0a6fdfded4f1971846cc. 
Apr 24 00:35:48.263312 containerd[1559]: time="2026-04-24T00:35:48.263255723Z" level=info msg="StartContainer for \"454c6dcd402265120f4ebba5a45ddd77f77fdb454bcb0a6fdfded4f1971846cc\" returns successfully" Apr 24 00:35:48.274711 containerd[1559]: time="2026-04-24T00:35:48.274536110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cfb95b74-h2dr7,Uid:1b02a44c-0d48-44e4-9230-c06c8d011820,Namespace:calico-system,Attempt:0,}" Apr 24 00:35:48.276317 containerd[1559]: time="2026-04-24T00:35:48.275589007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-gg2dd,Uid:7bc7331d-2c65-432e-a66d-716f0351f0c4,Namespace:calico-system,Attempt:0,}" Apr 24 00:35:48.490988 kubelet[2738]: E0424 00:35:48.490963 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:48.501471 kubelet[2738]: E0424 00:35:48.501179 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:48.518156 kubelet[2738]: I0424 00:35:48.517675 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-mm977" podStartSLOduration=37.517661342 podStartE2EDuration="37.517661342s" podCreationTimestamp="2026-04-24 00:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:35:48.516240575 +0000 UTC m=+44.385415980" watchObservedRunningTime="2026-04-24 00:35:48.517661342 +0000 UTC m=+44.386836747" Apr 24 00:35:48.573861 kubelet[2738]: I0424 00:35:48.573811 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-qd789" podStartSLOduration=37.57380084 
podStartE2EDuration="37.57380084s" podCreationTimestamp="2026-04-24 00:35:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:35:48.568748095 +0000 UTC m=+44.437923540" watchObservedRunningTime="2026-04-24 00:35:48.57380084 +0000 UTC m=+44.442976245" Apr 24 00:35:48.626545 systemd-networkd[1444]: caliea3f8f7796b: Link UP Apr 24 00:35:48.628357 systemd-networkd[1444]: caliea3f8f7796b: Gained carrier Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.368 [INFO][4862] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0 calico-kube-controllers-85cfb95b74- calico-system 1b02a44c-0d48-44e4-9230-c06c8d011820 886 0 2026-04-24 00:35:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85cfb95b74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-108-90 calico-kube-controllers-85cfb95b74-h2dr7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliea3f8f7796b [] [] }} ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Namespace="calico-system" Pod="calico-kube-controllers-85cfb95b74-h2dr7" WorkloadEndpoint="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.368 [INFO][4862] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Namespace="calico-system" Pod="calico-kube-controllers-85cfb95b74-h2dr7" WorkloadEndpoint="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 
00:35:48.476 [INFO][4887] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" HandleID="k8s-pod-network.d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Workload="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.518 [INFO][4887] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" HandleID="k8s-pod-network.d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Workload="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051e40), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-90", "pod":"calico-kube-controllers-85cfb95b74-h2dr7", "timestamp":"2026-04-24 00:35:48.476143011 +0000 UTC"}, Hostname:"172-236-108-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000300420)} Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.518 [INFO][4887] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.519 [INFO][4887] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.519 [INFO][4887] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-90' Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.530 [INFO][4887] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" host="172-236-108-90" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.554 [INFO][4887] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-90" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.577 [INFO][4887] ipam/ipam.go 526: Trying affinity for 192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.586 [INFO][4887] ipam/ipam.go 160: Attempting to load block cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.589 [INFO][4887] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.589 [INFO][4887] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" host="172-236-108-90" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.592 [INFO][4887] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69 Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.597 [INFO][4887] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" host="172-236-108-90" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.606 [INFO][4887] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.112.135/26] block=192.168.112.128/26 
handle="k8s-pod-network.d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" host="172-236-108-90" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.606 [INFO][4887] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.112.135/26] handle="k8s-pod-network.d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" host="172-236-108-90" Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.606 [INFO][4887] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:35:48.675242 containerd[1559]: 2026-04-24 00:35:48.606 [INFO][4887] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.112.135/26] IPv6=[] ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" HandleID="k8s-pod-network.d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Workload="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0" Apr 24 00:35:48.676549 containerd[1559]: 2026-04-24 00:35:48.614 [INFO][4862] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Namespace="calico-system" Pod="calico-kube-controllers-85cfb95b74-h2dr7" WorkloadEndpoint="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0", GenerateName:"calico-kube-controllers-85cfb95b74-", Namespace:"calico-system", SelfLink:"", UID:"1b02a44c-0d48-44e4-9230-c06c8d011820", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cfb95b74", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"", Pod:"calico-kube-controllers-85cfb95b74-h2dr7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea3f8f7796b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:48.676549 containerd[1559]: 2026-04-24 00:35:48.614 [INFO][4862] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.135/32] ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Namespace="calico-system" Pod="calico-kube-controllers-85cfb95b74-h2dr7" WorkloadEndpoint="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0" Apr 24 00:35:48.676549 containerd[1559]: 2026-04-24 00:35:48.614 [INFO][4862] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea3f8f7796b ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Namespace="calico-system" Pod="calico-kube-controllers-85cfb95b74-h2dr7" WorkloadEndpoint="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0" Apr 24 00:35:48.676549 containerd[1559]: 2026-04-24 00:35:48.630 [INFO][4862] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Namespace="calico-system" Pod="calico-kube-controllers-85cfb95b74-h2dr7" 
WorkloadEndpoint="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0" Apr 24 00:35:48.676549 containerd[1559]: 2026-04-24 00:35:48.630 [INFO][4862] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Namespace="calico-system" Pod="calico-kube-controllers-85cfb95b74-h2dr7" WorkloadEndpoint="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0", GenerateName:"calico-kube-controllers-85cfb95b74-", Namespace:"calico-system", SelfLink:"", UID:"1b02a44c-0d48-44e4-9230-c06c8d011820", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cfb95b74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69", Pod:"calico-kube-controllers-85cfb95b74-h2dr7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliea3f8f7796b", MAC:"52:dc:65:fb:75:31", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:48.676549 containerd[1559]: 2026-04-24 00:35:48.662 [INFO][4862] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" Namespace="calico-system" Pod="calico-kube-controllers-85cfb95b74-h2dr7" WorkloadEndpoint="172--236--108--90-k8s-calico--kube--controllers--85cfb95b74--h2dr7-eth0" Apr 24 00:35:48.719368 systemd-networkd[1444]: califf1f0d34292: Link UP Apr 24 00:35:48.719657 systemd-networkd[1444]: califf1f0d34292: Gained carrier Apr 24 00:35:48.738876 containerd[1559]: time="2026-04-24T00:35:48.738242507Z" level=info msg="connecting to shim d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69" address="unix:///run/containerd/s/c9185a1a7428a70fdf5464524c41964f06de0b43c27ea288a6bb7e015f9958c9" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.429 [INFO][4863] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0 goldmane-7fb6cdc5d9- calico-system 7bc7331d-2c65-432e-a66d-716f0351f0c4 885 0 2026-04-24 00:35:21 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7fb6cdc5d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-236-108-90 goldmane-7fb6cdc5d9-gg2dd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califf1f0d34292 [] [] }} ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gg2dd" WorkloadEndpoint="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.429 [INFO][4863] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gg2dd" WorkloadEndpoint="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.499 [INFO][4901] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" HandleID="k8s-pod-network.5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" Workload="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.521 [INFO][4901] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" HandleID="k8s-pod-network.5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" Workload="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b54f0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-108-90", "pod":"goldmane-7fb6cdc5d9-gg2dd", "timestamp":"2026-04-24 00:35:48.499426863 +0000 UTC"}, Hostname:"172-236-108-90", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003bc580)} Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.521 [INFO][4901] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.606 [INFO][4901] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.606 [INFO][4901] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-108-90' Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.633 [INFO][4901] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" host="172-236-108-90" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.656 [INFO][4901] ipam/ipam.go 409: Looking up existing affinities for host host="172-236-108-90" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.673 [INFO][4901] ipam/ipam.go 526: Trying affinity for 192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.676 [INFO][4901] ipam/ipam.go 160: Attempting to load block cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.680 [INFO][4901] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="172-236-108-90" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.680 [INFO][4901] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" host="172-236-108-90" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.687 [INFO][4901] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.692 [INFO][4901] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" host="172-236-108-90" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.703 [INFO][4901] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.112.136/26] block=192.168.112.128/26 
handle="k8s-pod-network.5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" host="172-236-108-90" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.705 [INFO][4901] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.112.136/26] handle="k8s-pod-network.5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" host="172-236-108-90" Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.706 [INFO][4901] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:35:48.748073 containerd[1559]: 2026-04-24 00:35:48.706 [INFO][4901] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.112.136/26] IPv6=[] ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" HandleID="k8s-pod-network.5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" Workload="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0" Apr 24 00:35:48.748628 containerd[1559]: 2026-04-24 00:35:48.710 [INFO][4863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gg2dd" WorkloadEndpoint="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0", GenerateName:"goldmane-7fb6cdc5d9-", Namespace:"calico-system", SelfLink:"", UID:"7bc7331d-2c65-432e-a66d-716f0351f0c4", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7fb6cdc5d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"", Pod:"goldmane-7fb6cdc5d9-gg2dd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.112.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califf1f0d34292", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:48.748628 containerd[1559]: 2026-04-24 00:35:48.711 [INFO][4863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.136/32] ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gg2dd" WorkloadEndpoint="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0" Apr 24 00:35:48.748628 containerd[1559]: 2026-04-24 00:35:48.711 [INFO][4863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf1f0d34292 ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gg2dd" WorkloadEndpoint="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0" Apr 24 00:35:48.748628 containerd[1559]: 2026-04-24 00:35:48.717 [INFO][4863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gg2dd" WorkloadEndpoint="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0" Apr 24 00:35:48.748628 containerd[1559]: 2026-04-24 00:35:48.721 [INFO][4863] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" 
Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gg2dd" WorkloadEndpoint="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0", GenerateName:"goldmane-7fb6cdc5d9-", Namespace:"calico-system", SelfLink:"", UID:"7bc7331d-2c65-432e-a66d-716f0351f0c4", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 35, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7fb6cdc5d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-108-90", ContainerID:"5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e", Pod:"goldmane-7fb6cdc5d9-gg2dd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.112.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califf1f0d34292", MAC:"9e:32:e1:62:19:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:35:48.748628 containerd[1559]: 2026-04-24 00:35:48.745 [INFO][4863] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gg2dd" WorkloadEndpoint="172--236--108--90-k8s-goldmane--7fb6cdc5d9--gg2dd-eth0" Apr 24 00:35:48.786263 
containerd[1559]: time="2026-04-24T00:35:48.786194269Z" level=info msg="connecting to shim 5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e" address="unix:///run/containerd/s/802345fb8bac7d63d2b5cec7116e829af126bf99a15af5bf1784c772e0bf1001" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:35:48.810679 systemd[1]: Started cri-containerd-d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69.scope - libcontainer container d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69. Apr 24 00:35:48.875674 systemd[1]: Started cri-containerd-5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e.scope - libcontainer container 5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e. Apr 24 00:35:48.928275 containerd[1559]: time="2026-04-24T00:35:48.927929452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cfb95b74-h2dr7,Uid:1b02a44c-0d48-44e4-9230-c06c8d011820,Namespace:calico-system,Attempt:0,} returns sandbox id \"d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69\"" Apr 24 00:35:48.952487 systemd-networkd[1444]: cali651600196c8: Gained IPv6LL Apr 24 00:35:48.953705 systemd-networkd[1444]: cali8fd9bf14d8d: Gained IPv6LL Apr 24 00:35:48.991607 containerd[1559]: time="2026-04-24T00:35:48.991552029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-gg2dd,Uid:7bc7331d-2c65-432e-a66d-716f0351f0c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e\"" Apr 24 00:35:49.045373 containerd[1559]: time="2026-04-24T00:35:49.045173106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:49.046229 containerd[1559]: time="2026-04-24T00:35:49.046182203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=46175896" Apr 24 
00:35:49.046807 containerd[1559]: time="2026-04-24T00:35:49.046769591Z" level=info msg="ImageCreate event name:\"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:49.048583 containerd[1559]: time="2026-04-24T00:35:49.048534407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:49.049427 containerd[1559]: time="2026-04-24T00:35:49.049357015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 3.437845495s" Apr 24 00:35:49.049427 containerd[1559]: time="2026-04-24T00:35:49.049393164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 24 00:35:49.053455 containerd[1559]: time="2026-04-24T00:35:49.053190385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 24 00:35:49.057737 containerd[1559]: time="2026-04-24T00:35:49.057447103Z" level=info msg="CreateContainer within sandbox \"e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 00:35:49.062863 containerd[1559]: time="2026-04-24T00:35:49.062813620Z" level=info msg="Container 8d03c049b0b454f73641dfeaefa768a17f5b8160b8be2c6128e6149647e07102: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:49.083547 containerd[1559]: time="2026-04-24T00:35:49.083469555Z" level=info msg="CreateContainer within sandbox 
\"e9e721f7c32202d610d9499e9c0351b9b79cded9af29ede46a5b7dbf48e2dcfc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d03c049b0b454f73641dfeaefa768a17f5b8160b8be2c6128e6149647e07102\"" Apr 24 00:35:49.084588 containerd[1559]: time="2026-04-24T00:35:49.084550332Z" level=info msg="StartContainer for \"8d03c049b0b454f73641dfeaefa768a17f5b8160b8be2c6128e6149647e07102\"" Apr 24 00:35:49.086709 containerd[1559]: time="2026-04-24T00:35:49.086668956Z" level=info msg="connecting to shim 8d03c049b0b454f73641dfeaefa768a17f5b8160b8be2c6128e6149647e07102" address="unix:///run/containerd/s/81515ff8f65fcc5fab5d3c68165c7751eb01608bd2553f11259718ac32533ce7" protocol=ttrpc version=3 Apr 24 00:35:49.108456 systemd[1]: Started cri-containerd-8d03c049b0b454f73641dfeaefa768a17f5b8160b8be2c6128e6149647e07102.scope - libcontainer container 8d03c049b0b454f73641dfeaefa768a17f5b8160b8be2c6128e6149647e07102. Apr 24 00:35:49.169748 containerd[1559]: time="2026-04-24T00:35:49.169699327Z" level=info msg="StartContainer for \"8d03c049b0b454f73641dfeaefa768a17f5b8160b8be2c6128e6149647e07102\" returns successfully" Apr 24 00:35:49.225262 containerd[1559]: time="2026-04-24T00:35:49.224899733Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:49.225553 containerd[1559]: time="2026-04-24T00:35:49.225524721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 24 00:35:49.229218 containerd[1559]: time="2026-04-24T00:35:49.228806792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 174.48557ms" 
Apr 24 00:35:49.229218 containerd[1559]: time="2026-04-24T00:35:49.228841732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 24 00:35:49.230191 containerd[1559]: time="2026-04-24T00:35:49.229800939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 24 00:35:49.236086 containerd[1559]: time="2026-04-24T00:35:49.236054053Z" level=info msg="CreateContainer within sandbox \"07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 00:35:49.245174 containerd[1559]: time="2026-04-24T00:35:49.245140979Z" level=info msg="Container 8c375a56c83f1adc474640f28a81b1b47d08f70a08b3945ae000867595f38d31: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:49.268365 containerd[1559]: time="2026-04-24T00:35:49.267275721Z" level=info msg="CreateContainer within sandbox \"07c30ebe84be5234c943c07622df544bb8c14181ba1d3e5d4509268e868875d4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8c375a56c83f1adc474640f28a81b1b47d08f70a08b3945ae000867595f38d31\"" Apr 24 00:35:49.269295 containerd[1559]: time="2026-04-24T00:35:49.269216146Z" level=info msg="StartContainer for \"8c375a56c83f1adc474640f28a81b1b47d08f70a08b3945ae000867595f38d31\"" Apr 24 00:35:49.270591 containerd[1559]: time="2026-04-24T00:35:49.270560472Z" level=info msg="connecting to shim 8c375a56c83f1adc474640f28a81b1b47d08f70a08b3945ae000867595f38d31" address="unix:///run/containerd/s/0a5669d335aa8ba82411a527eb732a19d3a6b703b015b265e97150ce4e162c02" protocol=ttrpc version=3 Apr 24 00:35:49.311417 systemd[1]: Started cri-containerd-8c375a56c83f1adc474640f28a81b1b47d08f70a08b3945ae000867595f38d31.scope - libcontainer container 8c375a56c83f1adc474640f28a81b1b47d08f70a08b3945ae000867595f38d31. 
Apr 24 00:35:49.395481 containerd[1559]: time="2026-04-24T00:35:49.395445443Z" level=info msg="StartContainer for \"8c375a56c83f1adc474640f28a81b1b47d08f70a08b3945ae000867595f38d31\" returns successfully" Apr 24 00:35:49.400458 systemd-networkd[1444]: cali35b3c6d7363: Gained IPv6LL Apr 24 00:35:49.518917 kubelet[2738]: E0424 00:35:49.518878 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:49.521666 kubelet[2738]: E0424 00:35:49.521576 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:49.535891 kubelet[2738]: I0424 00:35:49.535553 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7bd6c8766-8k6qz" podStartSLOduration=27.416325626 podStartE2EDuration="28.535541124s" podCreationTimestamp="2026-04-24 00:35:21 +0000 UTC" firstStartedPulling="2026-04-24 00:35:48.110415742 +0000 UTC m=+43.979591157" lastFinishedPulling="2026-04-24 00:35:49.22963124 +0000 UTC m=+45.098806655" observedRunningTime="2026-04-24 00:35:49.521173373 +0000 UTC m=+45.390348788" watchObservedRunningTime="2026-04-24 00:35:49.535541124 +0000 UTC m=+45.404716529" Apr 24 00:35:49.537725 kubelet[2738]: I0424 00:35:49.537358 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7bd6c8766-znzx7" podStartSLOduration=25.0946141 podStartE2EDuration="28.53734924s" podCreationTimestamp="2026-04-24 00:35:21 +0000 UTC" firstStartedPulling="2026-04-24 00:35:45.609850666 +0000 UTC m=+41.479026091" lastFinishedPulling="2026-04-24 00:35:49.052585806 +0000 UTC m=+44.921761231" observedRunningTime="2026-04-24 00:35:49.537117691 +0000 UTC m=+45.406293096" watchObservedRunningTime="2026-04-24 
00:35:49.53734924 +0000 UTC m=+45.406524645" Apr 24 00:35:50.425130 systemd-networkd[1444]: califf1f0d34292: Gained IPv6LL Apr 24 00:35:50.523311 kubelet[2738]: I0424 00:35:50.522591 2738 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 24 00:35:50.523311 kubelet[2738]: E0424 00:35:50.522840 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:50.523311 kubelet[2738]: E0424 00:35:50.523222 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:50.681081 systemd-networkd[1444]: caliea3f8f7796b: Gained IPv6LL Apr 24 00:35:51.525883 kubelet[2738]: E0424 00:35:51.525825 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:35:52.898005 containerd[1559]: time="2026-04-24T00:35:52.897956124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:52.899079 containerd[1559]: time="2026-04-24T00:35:52.898908722Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=50078175" Apr 24 00:35:52.899682 containerd[1559]: time="2026-04-24T00:35:52.899663040Z" level=info msg="ImageCreate event name:\"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:52.901159 containerd[1559]: time="2026-04-24T00:35:52.901138927Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:52.902137 containerd[1559]: time="2026-04-24T00:35:52.901768576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"53039568\" in 3.671832866s" Apr 24 00:35:52.902137 containerd[1559]: time="2026-04-24T00:35:52.901795006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\"" Apr 24 00:35:52.909743 containerd[1559]: time="2026-04-24T00:35:52.909469870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\"" Apr 24 00:35:52.927777 containerd[1559]: time="2026-04-24T00:35:52.927731724Z" level=info msg="CreateContainer within sandbox \"d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 00:35:52.940783 containerd[1559]: time="2026-04-24T00:35:52.940570048Z" level=info msg="Container 7a58868aac97ca0f354df25c6345406672310037813efd49f132f22f33062c72: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:52.954590 containerd[1559]: time="2026-04-24T00:35:52.954558780Z" level=info msg="CreateContainer within sandbox \"d29c027bca9c9e1bd2afd29073cb21496a1f303b826952980468efe98160ba69\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"7a58868aac97ca0f354df25c6345406672310037813efd49f132f22f33062c72\"" Apr 24 00:35:52.955604 containerd[1559]: time="2026-04-24T00:35:52.955537018Z" level=info 
msg="StartContainer for \"7a58868aac97ca0f354df25c6345406672310037813efd49f132f22f33062c72\"" Apr 24 00:35:52.956989 containerd[1559]: time="2026-04-24T00:35:52.956967556Z" level=info msg="connecting to shim 7a58868aac97ca0f354df25c6345406672310037813efd49f132f22f33062c72" address="unix:///run/containerd/s/c9185a1a7428a70fdf5464524c41964f06de0b43c27ea288a6bb7e015f9958c9" protocol=ttrpc version=3 Apr 24 00:35:52.985444 systemd[1]: Started cri-containerd-7a58868aac97ca0f354df25c6345406672310037813efd49f132f22f33062c72.scope - libcontainer container 7a58868aac97ca0f354df25c6345406672310037813efd49f132f22f33062c72. Apr 24 00:35:53.051368 containerd[1559]: time="2026-04-24T00:35:53.051315207Z" level=info msg="StartContainer for \"7a58868aac97ca0f354df25c6345406672310037813efd49f132f22f33062c72\" returns successfully" Apr 24 00:35:53.560518 kubelet[2738]: I0424 00:35:53.559789 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85cfb95b74-h2dr7" podStartSLOduration=28.585960059 podStartE2EDuration="32.559774538s" podCreationTimestamp="2026-04-24 00:35:21 +0000 UTC" firstStartedPulling="2026-04-24 00:35:48.934790263 +0000 UTC m=+44.803965678" lastFinishedPulling="2026-04-24 00:35:52.908604752 +0000 UTC m=+48.777780157" observedRunningTime="2026-04-24 00:35:53.55860469 +0000 UTC m=+49.427780095" watchObservedRunningTime="2026-04-24 00:35:53.559774538 +0000 UTC m=+49.428949943" Apr 24 00:35:54.307609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount451231048.mount: Deactivated successfully. 
Apr 24 00:35:54.709272 containerd[1559]: time="2026-04-24T00:35:54.709211787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:54.710162 containerd[1559]: time="2026-04-24T00:35:54.710037615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=53086083" Apr 24 00:35:54.710881 containerd[1559]: time="2026-04-24T00:35:54.710840703Z" level=info msg="ImageCreate event name:\"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:54.712478 containerd[1559]: time="2026-04-24T00:35:54.712407391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:35:54.713146 containerd[1559]: time="2026-04-24T00:35:54.713046431Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"53085929\" in 1.803184111s" Apr 24 00:35:54.713146 containerd[1559]: time="2026-04-24T00:35:54.713072171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\"" Apr 24 00:35:54.728474 containerd[1559]: time="2026-04-24T00:35:54.728445415Z" level=info msg="CreateContainer within sandbox \"5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 00:35:54.738054 containerd[1559]: time="2026-04-24T00:35:54.737432600Z" 
level=info msg="Container 030d55eaa3e50fb7f7c81c59e2a5d54167c5d01033021b96dab095c6eda00136: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:35:54.743783 containerd[1559]: time="2026-04-24T00:35:54.743761570Z" level=info msg="CreateContainer within sandbox \"5f64c61d0bb1cce0060f832381afce0f20c35fcd1c982867a5d979a32c812a0e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"030d55eaa3e50fb7f7c81c59e2a5d54167c5d01033021b96dab095c6eda00136\"" Apr 24 00:35:54.744505 containerd[1559]: time="2026-04-24T00:35:54.744476099Z" level=info msg="StartContainer for \"030d55eaa3e50fb7f7c81c59e2a5d54167c5d01033021b96dab095c6eda00136\"" Apr 24 00:35:54.746360 containerd[1559]: time="2026-04-24T00:35:54.746308016Z" level=info msg="connecting to shim 030d55eaa3e50fb7f7c81c59e2a5d54167c5d01033021b96dab095c6eda00136" address="unix:///run/containerd/s/802345fb8bac7d63d2b5cec7116e829af126bf99a15af5bf1784c772e0bf1001" protocol=ttrpc version=3 Apr 24 00:35:54.768427 systemd[1]: Started cri-containerd-030d55eaa3e50fb7f7c81c59e2a5d54167c5d01033021b96dab095c6eda00136.scope - libcontainer container 030d55eaa3e50fb7f7c81c59e2a5d54167c5d01033021b96dab095c6eda00136. 
Apr 24 00:35:54.838020 containerd[1559]: time="2026-04-24T00:35:54.837954286Z" level=info msg="StartContainer for \"030d55eaa3e50fb7f7c81c59e2a5d54167c5d01033021b96dab095c6eda00136\" returns successfully"
Apr 24 00:36:04.850312 kubelet[2738]: I0424 00:36:04.848371 2738 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 24 00:36:04.868490 kubelet[2738]: I0424 00:36:04.868437 2738 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-7fb6cdc5d9-gg2dd" podStartSLOduration=38.143438899 podStartE2EDuration="43.868424848s" podCreationTimestamp="2026-04-24 00:35:21 +0000 UTC" firstStartedPulling="2026-04-24 00:35:48.994249411 +0000 UTC m=+44.863424816" lastFinishedPulling="2026-04-24 00:35:54.71923536 +0000 UTC m=+50.588410765" observedRunningTime="2026-04-24 00:35:55.559036892 +0000 UTC m=+51.428212297" watchObservedRunningTime="2026-04-24 00:36:04.868424848 +0000 UTC m=+60.737600253"
Apr 24 00:36:31.270691 kubelet[2738]: E0424 00:36:31.270321 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:36:31.270691 kubelet[2738]: E0424 00:36:31.270609 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:36:33.269818 kubelet[2738]: E0424 00:36:33.269757 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:36:59.270515 kubelet[2738]: E0424 00:36:59.270464 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:37:05.270403 kubelet[2738]: E0424 00:37:05.270366 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:37:17.270871 kubelet[2738]: E0424 00:37:17.270606 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:37:21.412134 systemd[1]: Started sshd@7-172.236.108.90:22-20.229.252.112:57484.service - OpenSSH per-connection server daemon (20.229.252.112:57484).
Apr 24 00:37:21.941009 sshd[5614]: Accepted publickey for core from 20.229.252.112 port 57484 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:21.941929 sshd-session[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:21.947754 systemd-logind[1530]: New session 8 of user core.
Apr 24 00:37:21.952432 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 24 00:37:22.367385 sshd[5617]: Connection closed by 20.229.252.112 port 57484
Apr 24 00:37:22.368823 sshd-session[5614]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:22.374727 systemd-logind[1530]: Session 8 logged out. Waiting for processes to exit.
Apr 24 00:37:22.375425 systemd[1]: sshd@7-172.236.108.90:22-20.229.252.112:57484.service: Deactivated successfully.
Apr 24 00:37:22.380067 systemd[1]: session-8.scope: Deactivated successfully.
Apr 24 00:37:22.383548 systemd-logind[1530]: Removed session 8.
Apr 24 00:37:27.472666 systemd[1]: Started sshd@8-172.236.108.90:22-20.229.252.112:58804.service - OpenSSH per-connection server daemon (20.229.252.112:58804).
Apr 24 00:37:27.997537 sshd[5654]: Accepted publickey for core from 20.229.252.112 port 58804 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:27.999339 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:28.004745 systemd-logind[1530]: New session 9 of user core.
Apr 24 00:37:28.009604 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 24 00:37:28.349046 sshd[5681]: Connection closed by 20.229.252.112 port 58804
Apr 24 00:37:28.350510 sshd-session[5654]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:28.357595 systemd[1]: sshd@8-172.236.108.90:22-20.229.252.112:58804.service: Deactivated successfully.
Apr 24 00:37:28.360184 systemd[1]: session-9.scope: Deactivated successfully.
Apr 24 00:37:28.362517 systemd-logind[1530]: Session 9 logged out. Waiting for processes to exit.
Apr 24 00:37:28.364924 systemd-logind[1530]: Removed session 9.
Apr 24 00:37:33.270231 kubelet[2738]: E0424 00:37:33.270091 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:37:33.465496 systemd[1]: Started sshd@9-172.236.108.90:22-20.229.252.112:58812.service - OpenSSH per-connection server daemon (20.229.252.112:58812).
Apr 24 00:37:34.008362 sshd[5698]: Accepted publickey for core from 20.229.252.112 port 58812 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:34.010131 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:34.015765 systemd-logind[1530]: New session 10 of user core.
Apr 24 00:37:34.023442 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 24 00:37:34.378837 sshd[5701]: Connection closed by 20.229.252.112 port 58812
Apr 24 00:37:34.379621 sshd-session[5698]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:34.384914 systemd-logind[1530]: Session 10 logged out. Waiting for processes to exit.
Apr 24 00:37:34.385868 systemd[1]: sshd@9-172.236.108.90:22-20.229.252.112:58812.service: Deactivated successfully.
Apr 24 00:37:34.389626 systemd[1]: session-10.scope: Deactivated successfully.
Apr 24 00:37:34.392081 systemd-logind[1530]: Removed session 10.
Apr 24 00:37:34.488132 systemd[1]: Started sshd@10-172.236.108.90:22-20.229.252.112:58824.service - OpenSSH per-connection server daemon (20.229.252.112:58824).
Apr 24 00:37:35.014173 sshd[5714]: Accepted publickey for core from 20.229.252.112 port 58824 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:35.018191 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:35.025927 systemd-logind[1530]: New session 11 of user core.
Apr 24 00:37:35.034466 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 24 00:37:35.436775 sshd[5717]: Connection closed by 20.229.252.112 port 58824
Apr 24 00:37:35.438518 sshd-session[5714]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:35.443778 systemd-logind[1530]: Session 11 logged out. Waiting for processes to exit.
Apr 24 00:37:35.444545 systemd[1]: sshd@10-172.236.108.90:22-20.229.252.112:58824.service: Deactivated successfully.
Apr 24 00:37:35.447185 systemd[1]: session-11.scope: Deactivated successfully.
Apr 24 00:37:35.450910 systemd-logind[1530]: Removed session 11.
Apr 24 00:37:35.542945 systemd[1]: Started sshd@11-172.236.108.90:22-20.229.252.112:58826.service - OpenSSH per-connection server daemon (20.229.252.112:58826).
Apr 24 00:37:36.069904 sshd[5727]: Accepted publickey for core from 20.229.252.112 port 58826 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:36.071480 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:36.077493 systemd-logind[1530]: New session 12 of user core.
Apr 24 00:37:36.084413 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 24 00:37:36.271243 kubelet[2738]: E0424 00:37:36.271207 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:37:36.438868 sshd[5730]: Connection closed by 20.229.252.112 port 58826
Apr 24 00:37:36.441785 sshd-session[5727]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:36.448270 systemd[1]: sshd@11-172.236.108.90:22-20.229.252.112:58826.service: Deactivated successfully.
Apr 24 00:37:36.452534 systemd[1]: session-12.scope: Deactivated successfully.
Apr 24 00:37:36.457156 systemd-logind[1530]: Session 12 logged out. Waiting for processes to exit.
Apr 24 00:37:36.459748 systemd-logind[1530]: Removed session 12.
Apr 24 00:37:39.270379 kubelet[2738]: E0424 00:37:39.270209 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:37:41.551999 systemd[1]: Started sshd@12-172.236.108.90:22-20.229.252.112:56776.service - OpenSSH per-connection server daemon (20.229.252.112:56776).
Apr 24 00:37:42.107799 sshd[5766]: Accepted publickey for core from 20.229.252.112 port 56776 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:42.111196 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:42.120747 systemd-logind[1530]: New session 13 of user core.
Apr 24 00:37:42.127528 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 24 00:37:42.486032 sshd[5771]: Connection closed by 20.229.252.112 port 56776
Apr 24 00:37:42.486648 sshd-session[5766]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:42.491264 systemd-logind[1530]: Session 13 logged out. Waiting for processes to exit.
Apr 24 00:37:42.491714 systemd[1]: sshd@12-172.236.108.90:22-20.229.252.112:56776.service: Deactivated successfully.
Apr 24 00:37:42.494047 systemd[1]: session-13.scope: Deactivated successfully.
Apr 24 00:37:42.496435 systemd-logind[1530]: Removed session 13.
Apr 24 00:37:42.593890 systemd[1]: Started sshd@13-172.236.108.90:22-20.229.252.112:56782.service - OpenSSH per-connection server daemon (20.229.252.112:56782).
Apr 24 00:37:43.116332 sshd[5782]: Accepted publickey for core from 20.229.252.112 port 56782 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:43.117759 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:43.126422 systemd-logind[1530]: New session 14 of user core.
Apr 24 00:37:43.133500 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 24 00:37:43.737455 sshd[5785]: Connection closed by 20.229.252.112 port 56782
Apr 24 00:37:43.738572 sshd-session[5782]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:43.743476 systemd[1]: sshd@13-172.236.108.90:22-20.229.252.112:56782.service: Deactivated successfully.
Apr 24 00:37:43.747365 systemd[1]: session-14.scope: Deactivated successfully.
Apr 24 00:37:43.749124 systemd-logind[1530]: Session 14 logged out. Waiting for processes to exit.
Apr 24 00:37:43.751147 systemd-logind[1530]: Removed session 14.
Apr 24 00:37:43.844513 systemd[1]: Started sshd@14-172.236.108.90:22-20.229.252.112:56784.service - OpenSSH per-connection server daemon (20.229.252.112:56784).
Apr 24 00:37:44.380337 sshd[5795]: Accepted publickey for core from 20.229.252.112 port 56784 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:44.382403 sshd-session[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:44.390070 systemd-logind[1530]: New session 15 of user core.
Apr 24 00:37:44.398639 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 24 00:37:45.309179 sshd[5798]: Connection closed by 20.229.252.112 port 56784
Apr 24 00:37:45.309850 sshd-session[5795]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:45.315807 systemd[1]: sshd@14-172.236.108.90:22-20.229.252.112:56784.service: Deactivated successfully.
Apr 24 00:37:45.318858 systemd[1]: session-15.scope: Deactivated successfully.
Apr 24 00:37:45.320547 systemd-logind[1530]: Session 15 logged out. Waiting for processes to exit.
Apr 24 00:37:45.322310 systemd-logind[1530]: Removed session 15.
Apr 24 00:37:45.420987 systemd[1]: Started sshd@15-172.236.108.90:22-20.229.252.112:56792.service - OpenSSH per-connection server daemon (20.229.252.112:56792).
Apr 24 00:37:45.962326 sshd[5813]: Accepted publickey for core from 20.229.252.112 port 56792 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:45.963765 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:45.970263 systemd-logind[1530]: New session 16 of user core.
Apr 24 00:37:45.976419 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 24 00:37:46.444097 sshd[5816]: Connection closed by 20.229.252.112 port 56792
Apr 24 00:37:46.445467 sshd-session[5813]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:46.449942 systemd[1]: sshd@15-172.236.108.90:22-20.229.252.112:56792.service: Deactivated successfully.
Apr 24 00:37:46.452940 systemd[1]: session-16.scope: Deactivated successfully.
Apr 24 00:37:46.454498 systemd-logind[1530]: Session 16 logged out. Waiting for processes to exit.
Apr 24 00:37:46.456070 systemd-logind[1530]: Removed session 16.
Apr 24 00:37:46.557813 systemd[1]: Started sshd@16-172.236.108.90:22-20.229.252.112:38590.service - OpenSSH per-connection server daemon (20.229.252.112:38590).
Apr 24 00:37:47.108574 sshd[5828]: Accepted publickey for core from 20.229.252.112 port 38590 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:47.110134 sshd-session[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:47.119746 systemd-logind[1530]: New session 17 of user core.
Apr 24 00:37:47.125471 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 24 00:37:47.580427 sshd[5831]: Connection closed by 20.229.252.112 port 38590
Apr 24 00:37:47.582455 sshd-session[5828]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:47.588090 systemd[1]: sshd@16-172.236.108.90:22-20.229.252.112:38590.service: Deactivated successfully.
Apr 24 00:37:47.592321 systemd[1]: session-17.scope: Deactivated successfully.
Apr 24 00:37:47.594269 systemd-logind[1530]: Session 17 logged out. Waiting for processes to exit.
Apr 24 00:37:47.598079 systemd-logind[1530]: Removed session 17.
Apr 24 00:37:51.270243 kubelet[2738]: E0424 00:37:51.270208 2738 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:37:52.697502 systemd[1]: Started sshd@17-172.236.108.90:22-20.229.252.112:38606.service - OpenSSH per-connection server daemon (20.229.252.112:38606).
Apr 24 00:37:53.250420 sshd[5846]: Accepted publickey for core from 20.229.252.112 port 38606 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:53.252439 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:53.258386 systemd-logind[1530]: New session 18 of user core.
Apr 24 00:37:53.262433 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 24 00:37:53.620774 sshd[5849]: Connection closed by 20.229.252.112 port 38606
Apr 24 00:37:53.622043 sshd-session[5846]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:53.626267 systemd-logind[1530]: Session 18 logged out. Waiting for processes to exit.
Apr 24 00:37:53.627070 systemd[1]: sshd@17-172.236.108.90:22-20.229.252.112:38606.service: Deactivated successfully.
Apr 24 00:37:53.629197 systemd[1]: session-18.scope: Deactivated successfully.
Apr 24 00:37:53.632430 systemd-logind[1530]: Removed session 18.
Apr 24 00:37:58.728724 systemd[1]: Started sshd@18-172.236.108.90:22-20.229.252.112:41154.service - OpenSSH per-connection server daemon (20.229.252.112:41154).
Apr 24 00:37:59.255906 sshd[5904]: Accepted publickey for core from 20.229.252.112 port 41154 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:37:59.257243 sshd-session[5904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:37:59.266985 systemd-logind[1530]: New session 19 of user core.
Apr 24 00:37:59.270497 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 24 00:37:59.612340 sshd[5907]: Connection closed by 20.229.252.112 port 41154
Apr 24 00:37:59.614365 sshd-session[5904]: pam_unix(sshd:session): session closed for user core
Apr 24 00:37:59.619349 systemd-logind[1530]: Session 19 logged out. Waiting for processes to exit.
Apr 24 00:37:59.619794 systemd[1]: sshd@18-172.236.108.90:22-20.229.252.112:41154.service: Deactivated successfully.
Apr 24 00:37:59.621796 systemd[1]: session-19.scope: Deactivated successfully.
Apr 24 00:37:59.623986 systemd-logind[1530]: Removed session 19.