Apr 24 00:14:49.933740 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Apr 23 22:08:58 -00 2026
Apr 24 00:14:49.933763 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:14:49.933772 kernel: BIOS-provided physical RAM map:
Apr 24 00:14:49.933778 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Apr 24 00:14:49.933784 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Apr 24 00:14:49.933790 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 24 00:14:49.933798 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Apr 24 00:14:49.933805 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Apr 24 00:14:49.933811 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 24 00:14:49.933816 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 24 00:14:49.933822 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 24 00:14:49.933828 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 24 00:14:49.933834 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Apr 24 00:14:49.933873 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Apr 24 00:14:49.933884 kernel: NX (Execute Disable) protection: active
Apr 24 00:14:49.933891 kernel: APIC: Static calls initialized
Apr 24 00:14:49.933897 kernel: SMBIOS 2.8 present.
Apr 24 00:14:49.933904 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Apr 24 00:14:49.933910 kernel: DMI: Memory slots populated: 1/1
Apr 24 00:14:49.933916 kernel: Hypervisor detected: KVM
Apr 24 00:14:49.933925 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 24 00:14:49.933931 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 24 00:14:49.933937 kernel: kvm-clock: using sched offset of 7423253898 cycles
Apr 24 00:14:49.933944 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 24 00:14:49.933951 kernel: tsc: Detected 1999.998 MHz processor
Apr 24 00:14:49.933958 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 24 00:14:49.933964 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 24 00:14:49.933971 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Apr 24 00:14:49.933978 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 24 00:14:49.933984 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 24 00:14:49.933993 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Apr 24 00:14:49.933999 kernel: Using GB pages for direct mapping
Apr 24 00:14:49.934006 kernel: ACPI: Early table checksum verification disabled
Apr 24 00:14:49.934012 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Apr 24 00:14:49.934019 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:14:49.934025 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:14:49.934032 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:14:49.934038 kernel: ACPI: FACS 0x000000007FFE0000 000040
Apr 24 00:14:49.934045 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:14:49.934053 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:14:49.934063 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:14:49.934070 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 24 00:14:49.934077 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Apr 24 00:14:49.934084 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Apr 24 00:14:49.934093 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Apr 24 00:14:49.934099 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Apr 24 00:14:49.934106 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Apr 24 00:14:49.934113 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Apr 24 00:14:49.934120 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Apr 24 00:14:49.934126 kernel: No NUMA configuration found
Apr 24 00:14:49.934133 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Apr 24 00:14:49.934140 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff]
Apr 24 00:14:49.934147 kernel: Zone ranges:
Apr 24 00:14:49.934155 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 24 00:14:49.934162 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 24 00:14:49.934169 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Apr 24 00:14:49.934175 kernel: Device empty
Apr 24 00:14:49.934182 kernel: Movable zone start for each node
Apr 24 00:14:49.934189 kernel: Early memory node ranges
Apr 24 00:14:49.934196 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 24 00:14:49.934202 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Apr 24 00:14:49.934209 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Apr 24 00:14:49.934216 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Apr 24 00:14:49.934225 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 24 00:14:49.934232 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 24 00:14:49.934238 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Apr 24 00:14:49.934245 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 24 00:14:49.934252 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 24 00:14:49.934259 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 24 00:14:49.934265 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 24 00:14:49.934272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 24 00:14:49.934279 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 24 00:14:49.934289 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 24 00:14:49.934297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 24 00:14:49.934303 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 24 00:14:49.934310 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 24 00:14:49.934317 kernel: TSC deadline timer available
Apr 24 00:14:49.934323 kernel: CPU topo: Max. logical packages: 1
Apr 24 00:14:49.934330 kernel: CPU topo: Max. logical dies: 1
Apr 24 00:14:49.934337 kernel: CPU topo: Max. dies per package: 1
Apr 24 00:14:49.934343 kernel: CPU topo: Max. threads per core: 1
Apr 24 00:14:49.934352 kernel: CPU topo: Num. cores per package: 2
Apr 24 00:14:49.934359 kernel: CPU topo: Num. threads per package: 2
Apr 24 00:14:49.934366 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Apr 24 00:14:49.934372 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 24 00:14:49.934379 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 24 00:14:49.934386 kernel: kvm-guest: setup PV sched yield
Apr 24 00:14:49.934393 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 24 00:14:49.934399 kernel: Booting paravirtualized kernel on KVM
Apr 24 00:14:49.934406 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 24 00:14:49.934415 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 24 00:14:49.934422 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u1048576
Apr 24 00:14:49.934429 kernel: pcpu-alloc: s207448 r8192 d30120 u1048576 alloc=1*2097152
Apr 24 00:14:49.934435 kernel: pcpu-alloc: [0] 0 1
Apr 24 00:14:49.934442 kernel: kvm-guest: PV spinlocks enabled
Apr 24 00:14:49.934449 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 24 00:14:49.934456 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:14:49.934463 kernel: random: crng init done
Apr 24 00:14:49.934472 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 24 00:14:49.934479 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 24 00:14:49.934486 kernel: Fallback order for Node 0: 0
Apr 24 00:14:49.934493 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Apr 24 00:14:49.934499 kernel: Policy zone: Normal
Apr 24 00:14:49.934506 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 24 00:14:49.934513 kernel: software IO TLB: area num 2.
Apr 24 00:14:49.934520 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 24 00:14:49.934526 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 24 00:14:49.934535 kernel: ftrace: allocated 157 pages with 5 groups
Apr 24 00:14:49.934541 kernel: Dynamic Preempt: voluntary
Apr 24 00:14:49.934548 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 24 00:14:49.934555 kernel: rcu: RCU event tracing is enabled.
Apr 24 00:14:49.934562 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 24 00:14:49.934569 kernel: Trampoline variant of Tasks RCU enabled.
Apr 24 00:14:49.934575 kernel: Rude variant of Tasks RCU enabled.
Apr 24 00:14:49.934582 kernel: Tracing variant of Tasks RCU enabled.
Apr 24 00:14:49.934589 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 24 00:14:49.934595 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 24 00:14:49.934793 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 00:14:49.934806 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 00:14:49.934815 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 24 00:14:49.934822 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 24 00:14:49.934829 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 24 00:14:49.934861 kernel: Console: colour VGA+ 80x25
Apr 24 00:14:49.934873 kernel: printk: legacy console [tty0] enabled
Apr 24 00:14:49.935888 kernel: printk: legacy console [ttyS0] enabled
Apr 24 00:14:49.935902 kernel: ACPI: Core revision 20240827
Apr 24 00:14:49.935913 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 24 00:14:49.935921 kernel: APIC: Switch to symmetric I/O mode setup
Apr 24 00:14:49.935928 kernel: x2apic enabled
Apr 24 00:14:49.935935 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 24 00:14:49.935942 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 24 00:14:49.935949 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 24 00:14:49.935956 kernel: kvm-guest: setup PV IPIs
Apr 24 00:14:49.935965 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 24 00:14:49.935973 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Apr 24 00:14:49.935980 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999998)
Apr 24 00:14:49.935987 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 24 00:14:49.935994 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Apr 24 00:14:49.936001 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Apr 24 00:14:49.936008 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 24 00:14:49.936015 kernel: Spectre V2 : Mitigation: Retpolines
Apr 24 00:14:49.936022 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 24 00:14:49.936031 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Apr 24 00:14:49.936038 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 24 00:14:49.936045 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 24 00:14:49.936052 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Apr 24 00:14:49.936060 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Apr 24 00:14:49.936067 kernel: active return thunk: srso_alias_return_thunk
Apr 24 00:14:49.936074 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Apr 24 00:14:49.936081 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Apr 24 00:14:49.936090 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 24 00:14:49.936097 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 24 00:14:49.936104 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 24 00:14:49.936111 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 24 00:14:49.936118 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 24 00:14:49.936125 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 24 00:14:49.936132 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Apr 24 00:14:49.936139 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Apr 24 00:14:49.936146 kernel: Freeing SMP alternatives memory: 32K
Apr 24 00:14:49.936155 kernel: pid_max: default: 32768 minimum: 301
Apr 24 00:14:49.936162 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 24 00:14:49.936169 kernel: landlock: Up and running.
Apr 24 00:14:49.936175 kernel: SELinux: Initializing.
Apr 24 00:14:49.936182 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 00:14:49.936190 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 24 00:14:49.936197 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Apr 24 00:14:49.936204 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Apr 24 00:14:49.936211 kernel: ... version: 0
Apr 24 00:14:49.936219 kernel: ... bit width: 48
Apr 24 00:14:49.936226 kernel: ... generic registers: 6
Apr 24 00:14:49.936233 kernel: ... value mask: 0000ffffffffffff
Apr 24 00:14:49.936240 kernel: ... max period: 00007fffffffffff
Apr 24 00:14:49.936247 kernel: ... fixed-purpose events: 0
Apr 24 00:14:49.936254 kernel: ... event mask: 000000000000003f
Apr 24 00:14:49.936261 kernel: signal: max sigframe size: 3376
Apr 24 00:14:49.936268 kernel: rcu: Hierarchical SRCU implementation.
Apr 24 00:14:49.936275 kernel: rcu: Max phase no-delay instances is 400.
Apr 24 00:14:49.936284 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 24 00:14:49.936291 kernel: smp: Bringing up secondary CPUs ...
Apr 24 00:14:49.936298 kernel: smpboot: x86: Booting SMP configuration:
Apr 24 00:14:49.936305 kernel: .... node #0, CPUs: #1
Apr 24 00:14:49.936312 kernel: smp: Brought up 1 node, 2 CPUs
Apr 24 00:14:49.936319 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Apr 24 00:14:49.936326 kernel: Memory: 3953608K/4193772K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 235480K reserved, 0K cma-reserved)
Apr 24 00:14:49.936333 kernel: devtmpfs: initialized
Apr 24 00:14:49.936340 kernel: x86/mm: Memory block size: 128MB
Apr 24 00:14:49.936349 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 24 00:14:49.936356 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 24 00:14:49.936363 kernel: pinctrl core: initialized pinctrl subsystem
Apr 24 00:14:49.936370 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 24 00:14:49.936377 kernel: audit: initializing netlink subsys (disabled)
Apr 24 00:14:49.936384 kernel: audit: type=2000 audit(1776989687.065:1): state=initialized audit_enabled=0 res=1
Apr 24 00:14:49.936391 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 24 00:14:49.936398 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 24 00:14:49.936405 kernel: cpuidle: using governor menu
Apr 24 00:14:49.936414 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 24 00:14:49.936421 kernel: dca service started, version 1.12.1
Apr 24 00:14:49.936428 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 24 00:14:49.936435 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 24 00:14:49.936442 kernel: PCI: Using configuration type 1 for base access
Apr 24 00:14:49.936449 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 24 00:14:49.936456 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 24 00:14:49.936463 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 24 00:14:49.936470 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 24 00:14:49.936479 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 24 00:14:49.936485 kernel: ACPI: Added _OSI(Module Device)
Apr 24 00:14:49.936492 kernel: ACPI: Added _OSI(Processor Device)
Apr 24 00:14:49.936499 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 24 00:14:49.936506 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 24 00:14:49.936513 kernel: ACPI: Interpreter enabled
Apr 24 00:14:49.936520 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 24 00:14:49.936527 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 24 00:14:49.936534 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 24 00:14:49.936543 kernel: PCI: Using E820 reservations for host bridge windows
Apr 24 00:14:49.936550 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 24 00:14:49.936557 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 24 00:14:49.937862 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 24 00:14:49.938033 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 24 00:14:49.938161 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 24 00:14:49.938171 kernel: PCI host bridge to bus 0000:00
Apr 24 00:14:49.938296 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 24 00:14:49.938415 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 24 00:14:49.938526 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 24 00:14:49.938779 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Apr 24 00:14:49.940514 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 24 00:14:49.940638 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Apr 24 00:14:49.940927 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 24 00:14:49.941104 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 24 00:14:49.941245 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 24 00:14:49.941368 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 24 00:14:49.941489 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 24 00:14:49.941609 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 24 00:14:49.941728 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 24 00:14:49.942103 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Apr 24 00:14:49.942234 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Apr 24 00:14:49.942355 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 24 00:14:49.942475 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 24 00:14:49.942788 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 24 00:14:49.942926 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Apr 24 00:14:49.943048 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 24 00:14:49.943173 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 24 00:14:49.943293 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 24 00:14:49.943425 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 24 00:14:49.943546 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 24 00:14:49.943673 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 24 00:14:49.944090 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Apr 24 00:14:49.944215 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Apr 24 00:14:49.944350 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 24 00:14:49.944471 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 24 00:14:49.944480 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 24 00:14:49.944488 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 24 00:14:49.944495 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 24 00:14:49.944502 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 24 00:14:49.944510 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 24 00:14:49.944517 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 24 00:14:49.944527 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 24 00:14:49.944534 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 24 00:14:49.944541 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 24 00:14:49.944548 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 24 00:14:49.944555 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 24 00:14:49.944562 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 24 00:14:49.944569 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 24 00:14:49.944577 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 24 00:14:49.944584 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 24 00:14:49.944593 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 24 00:14:49.944600 kernel: iommu: Default domain type: Translated
Apr 24 00:14:49.944607 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 24 00:14:49.944614 kernel: PCI: Using ACPI for IRQ routing
Apr 24 00:14:49.944621 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 24 00:14:49.944628 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Apr 24 00:14:49.944635 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Apr 24 00:14:49.944769 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 24 00:14:49.946898 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 24 00:14:49.947033 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 24 00:14:49.947044 kernel: vgaarb: loaded
Apr 24 00:14:49.947052 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 24 00:14:49.947059 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 24 00:14:49.947066 kernel: clocksource: Switched to clocksource kvm-clock
Apr 24 00:14:49.947073 kernel: VFS: Disk quotas dquot_6.6.0
Apr 24 00:14:49.947080 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 24 00:14:49.947087 kernel: pnp: PnP ACPI init
Apr 24 00:14:49.947228 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 24 00:14:49.947239 kernel: pnp: PnP ACPI: found 5 devices
Apr 24 00:14:49.947246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 24 00:14:49.947253 kernel: NET: Registered PF_INET protocol family
Apr 24 00:14:49.947261 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 24 00:14:49.947268 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 24 00:14:49.947275 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 24 00:14:49.947282 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 24 00:14:49.947292 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 24 00:14:49.947299 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 24 00:14:49.947306 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 00:14:49.947313 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 24 00:14:49.947320 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 24 00:14:49.947327 kernel: NET: Registered PF_XDP protocol family
Apr 24 00:14:49.947440 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 24 00:14:49.947552 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 24 00:14:49.947864 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 24 00:14:49.947986 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Apr 24 00:14:49.948096 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 24 00:14:49.948206 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Apr 24 00:14:49.948216 kernel: PCI: CLS 0 bytes, default 64
Apr 24 00:14:49.948223 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 24 00:14:49.948230 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Apr 24 00:14:49.948237 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a8595ce59, max_idle_ns: 881590778713 ns
Apr 24 00:14:49.948244 kernel: Initialise system trusted keyrings
Apr 24 00:14:49.948254 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 24 00:14:49.948261 kernel: Key type asymmetric registered
Apr 24 00:14:49.948268 kernel: Asymmetric key parser 'x509' registered
Apr 24 00:14:49.948275 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 24 00:14:49.948282 kernel: io scheduler mq-deadline registered
Apr 24 00:14:49.948289 kernel: io scheduler kyber registered
Apr 24 00:14:49.948296 kernel: io scheduler bfq registered
Apr 24 00:14:49.948303 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 24 00:14:49.948310 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 24 00:14:49.948320 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 24 00:14:49.948327 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 24 00:14:49.948334 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 24 00:14:49.948341 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 24 00:14:49.948348 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 24 00:14:49.948355 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 24 00:14:49.948362 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 24 00:14:49.948493 kernel: rtc_cmos 00:03: RTC can wake from S4
Apr 24 00:14:49.948611 kernel: rtc_cmos 00:03: registered as rtc0
Apr 24 00:14:49.948729 kernel: rtc_cmos 00:03: setting system clock to 2026-04-24T00:14:49 UTC (1776989689)
Apr 24 00:14:49.950728 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 24 00:14:49.950743 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Apr 24 00:14:49.950751 kernel: NET: Registered PF_INET6 protocol family
Apr 24 00:14:49.950759 kernel: Segment Routing with IPv6
Apr 24 00:14:49.950766 kernel: In-situ OAM (IOAM) with IPv6
Apr 24 00:14:49.950773 kernel: NET: Registered PF_PACKET protocol family
Apr 24 00:14:49.950780 kernel: Key type dns_resolver registered
Apr 24 00:14:49.950791 kernel: IPI shorthand broadcast: enabled
Apr 24 00:14:49.950798 kernel: sched_clock: Marking stable (3038003679, 342309983)->(3479588949, -99275287)
Apr 24 00:14:49.950805 kernel: registered taskstats version 1
Apr 24 00:14:49.950813 kernel: Loading compiled-in X.509 certificates
Apr 24 00:14:49.950820 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 09f9b319c99eb3f54e68ef799fdb2bce5b238ec0'
Apr 24 00:14:49.950827 kernel: Demotion targets for Node 0: null
Apr 24 00:14:49.950834 kernel: Key type .fscrypt registered
Apr 24 00:14:49.950841 kernel: Key type fscrypt-provisioning registered
Apr 24 00:14:49.950903 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 24 00:14:49.950913 kernel: ima: Allocated hash algorithm: sha1
Apr 24 00:14:49.950920 kernel: ima: No architecture policies found
Apr 24 00:14:49.950927 kernel: clk: Disabling unused clocks
Apr 24 00:14:49.950934 kernel: Warning: unable to open an initial console.
Apr 24 00:14:49.950942 kernel: Freeing unused kernel image (initmem) memory: 46224K
Apr 24 00:14:49.950949 kernel: Write protecting the kernel read-only data: 40960k
Apr 24 00:14:49.950956 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 24 00:14:49.950963 kernel: Run /init as init process
Apr 24 00:14:49.950971 kernel: with arguments:
Apr 24 00:14:49.950980 kernel: /init
Apr 24 00:14:49.950987 kernel: with environment:
Apr 24 00:14:49.951007 kernel: HOME=/
Apr 24 00:14:49.951016 kernel: TERM=linux
Apr 24 00:14:49.951025 systemd[1]: Successfully made /usr/ read-only.
Apr 24 00:14:49.951035 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 24 00:14:49.951043 systemd[1]: Detected virtualization kvm.
Apr 24 00:14:49.951053 systemd[1]: Detected architecture x86-64.
Apr 24 00:14:49.951061 systemd[1]: Running in initrd.
Apr 24 00:14:49.951068 systemd[1]: No hostname configured, using default hostname.
Apr 24 00:14:49.951076 systemd[1]: Hostname set to .
Apr 24 00:14:49.951084 systemd[1]: Initializing machine ID from random generator.
Apr 24 00:14:49.951092 systemd[1]: Queued start job for default target initrd.target.
Apr 24 00:14:49.951100 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:14:49.951108 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:14:49.951118 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 24 00:14:49.951126 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 00:14:49.951134 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 24 00:14:49.951143 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 24 00:14:49.951152 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 24 00:14:49.951160 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 24 00:14:49.951168 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:14:49.951178 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:14:49.951185 systemd[1]: Reached target paths.target - Path Units.
Apr 24 00:14:49.951193 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 00:14:49.951201 systemd[1]: Reached target swap.target - Swaps.
Apr 24 00:14:49.951209 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 00:14:49.951217 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 00:14:49.951224 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 00:14:49.951232 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 24 00:14:49.951240 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 24 00:14:49.951250 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:14:49.951258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:14:49.951268 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:14:49.951276 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 00:14:49.951284 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 24 00:14:49.951293 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 00:14:49.951302 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 24 00:14:49.951310 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 24 00:14:49.951318 systemd[1]: Starting systemd-fsck-usr.service...
Apr 24 00:14:49.951325 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 00:14:49.951333 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 00:14:49.951341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 00:14:49.951371 systemd-journald[187]: Collecting audit messages is disabled.
Apr 24 00:14:49.951394 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 24 00:14:49.951403 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:14:49.951413 systemd[1]: Finished systemd-fsck-usr.service.
Apr 24 00:14:49.951421 systemd-journald[187]: Journal started
Apr 24 00:14:49.951438 systemd-journald[187]: Runtime Journal (/run/log/journal/5dac2b1785de46cebec41ad54a5471ff) is 8M, max 78.2M, 70.2M free.
Apr 24 00:14:49.933692 systemd-modules-load[188]: Inserted module 'overlay'
Apr 24 00:14:49.959017 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 00:14:49.968872 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 24 00:14:49.970927 kernel: Bridge firewalling registered
Apr 24 00:14:49.970387 systemd-modules-load[188]: Inserted module 'br_netfilter'
Apr 24 00:14:50.077547 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:14:50.079605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:14:50.083415 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 24 00:14:50.086967 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 00:14:50.102687 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 24 00:14:50.106941 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 24 00:14:50.116438 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:14:50.123431 systemd-tmpfiles[206]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 24 00:14:50.127673 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 24 00:14:50.131495 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 00:14:50.136240 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 00:14:50.139985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 00:14:50.142729 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 24 00:14:50.146699 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 00:14:50.151388 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 00:14:50.167544 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=35bf60e399c7fbdab9d27e362bd719e7cadd795a3fa26a4f30de01ccc70fba7e
Apr 24 00:14:50.185667 systemd-resolved[225]: Positive Trust Anchors:
Apr 24 00:14:50.186515 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 00:14:50.186543 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 00:14:50.193113 systemd-resolved[225]: Defaulting to hostname 'linux'.
Apr 24 00:14:50.194343 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 00:14:50.195660 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 00:14:50.278895 kernel: SCSI subsystem initialized
Apr 24 00:14:50.288873 kernel: Loading iSCSI transport class v2.0-870.
Apr 24 00:14:50.302875 kernel: iscsi: registered transport (tcp)
Apr 24 00:14:50.325074 kernel: iscsi: registered transport (qla4xxx)
Apr 24 00:14:50.325125 kernel: QLogic iSCSI HBA Driver
Apr 24 00:14:50.347284 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 00:14:50.362172 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:14:50.366006 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 00:14:50.415314 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 24 00:14:50.417970 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 24 00:14:50.481871 kernel: raid6: avx2x4 gen() 29515 MB/s
Apr 24 00:14:50.490864 kernel: raid6: avx2x2 gen() 28995 MB/s
Apr 24 00:14:50.508957 kernel: raid6: avx2x1 gen() 21051 MB/s
Apr 24 00:14:50.508988 kernel: raid6: using algorithm avx2x4 gen() 29515 MB/s
Apr 24 00:14:50.529175 kernel: raid6: .... xor() 4790 MB/s, rmw enabled
Apr 24 00:14:50.529192 kernel: raid6: using avx2x2 recovery algorithm
Apr 24 00:14:50.552982 kernel: xor: automatically using best checksumming function avx
Apr 24 00:14:50.687892 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 24 00:14:50.695294 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 00:14:50.697662 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 00:14:50.725259 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Apr 24 00:14:50.732212 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 00:14:50.735786 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 24 00:14:50.763235 dracut-pre-trigger[442]: rd.md=0: removing MD RAID activation
Apr 24 00:14:50.793380 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 00:14:50.796791 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 00:14:50.880883 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:14:50.887231 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 24 00:14:50.978310 kernel: cryptd: max_cpu_qlen set to 1000
Apr 24 00:14:50.982901 kernel: libata version 3.00 loaded.
Apr 24 00:14:50.990671 kernel: ahci 0000:00:1f.2: version 3.0
Apr 24 00:14:50.990891 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 24 00:14:50.997391 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 24 00:14:50.997566 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 24 00:14:50.997717 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 24 00:14:51.004389 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Apr 24 00:14:51.006324 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 00:14:51.014524 kernel: scsi host1: ahci
Apr 24 00:14:51.015154 kernel: scsi host2: ahci
Apr 24 00:14:51.008243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:14:51.033085 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 24 00:14:51.033103 kernel: scsi host0: Virtio SCSI HBA
Apr 24 00:14:51.033272 kernel: AES CTR mode by8 optimization enabled
Apr 24 00:14:51.033285 kernel: scsi host3: ahci
Apr 24 00:14:51.033438 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Apr 24 00:14:51.033462 kernel: scsi host4: ahci
Apr 24 00:14:51.016194 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 00:14:51.215003 kernel: scsi host5: ahci
Apr 24 00:14:51.215217 kernel: scsi host6: ahci
Apr 24 00:14:51.198064 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 00:14:51.203019 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:14:51.271060 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 24 lpm-pol 1
Apr 24 00:14:51.271089 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 24 lpm-pol 1
Apr 24 00:14:51.271109 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 24 lpm-pol 1
Apr 24 00:14:51.271122 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 24 lpm-pol 1
Apr 24 00:14:51.271134 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 24 lpm-pol 1
Apr 24 00:14:51.271146 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 24 lpm-pol 1
Apr 24 00:14:51.290756 kernel: sd 0:0:0:0: Power-on or device reset occurred
Apr 24 00:14:51.291014 kernel: sd 0:0:0:0: [sda] 167739392 512-byte logical blocks: (85.9 GB/80.0 GiB)
Apr 24 00:14:51.293881 kernel: sd 0:0:0:0: [sda] Write Protect is off
Apr 24 00:14:51.294079 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Apr 24 00:14:51.295841 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 24 00:14:51.310882 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 24 00:14:51.310907 kernel: GPT:9289727 != 167739391
Apr 24 00:14:51.310920 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 24 00:14:51.310930 kernel: GPT:9289727 != 167739391
Apr 24 00:14:51.310941 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 24 00:14:51.310951 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:14:51.313605 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Apr 24 00:14:51.425240 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:14:51.585632 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Apr 24 00:14:51.585695 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 24 00:14:51.585709 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 24 00:14:51.588884 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 24 00:14:51.588924 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 24 00:14:51.591865 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 24 00:14:51.662388 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Apr 24 00:14:51.678371 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Apr 24 00:14:51.686353 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Apr 24 00:14:51.687137 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Apr 24 00:14:51.689250 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 24 00:14:51.699351 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 24 00:14:51.701588 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 00:14:51.702400 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:14:51.704209 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 00:14:51.706698 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 24 00:14:51.710940 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 24 00:14:51.722416 disk-uuid[611]: Primary Header is updated.
Apr 24 00:14:51.722416 disk-uuid[611]: Secondary Entries is updated.
Apr 24 00:14:51.722416 disk-uuid[611]: Secondary Header is updated.
Apr 24 00:14:51.731910 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:14:51.733509 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 00:14:52.749430 disk-uuid[612]: The operation has completed successfully.
Apr 24 00:14:52.751131 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 24 00:14:52.804109 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 24 00:14:52.804241 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 24 00:14:52.834203 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 24 00:14:52.849135 sh[633]: Success
Apr 24 00:14:52.871113 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 24 00:14:52.871154 kernel: device-mapper: uevent: version 1.0.3
Apr 24 00:14:52.871867 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 24 00:14:52.886919 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 24 00:14:52.927785 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 24 00:14:52.933612 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 24 00:14:52.942815 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 24 00:14:52.954870 kernel: BTRFS: device fsid b0afcb9a-4dc6-42cc-b61f-b370046a03ca devid 1 transid 32 /dev/mapper/usr (254:0) scanned by mount (646)
Apr 24 00:14:52.959105 kernel: BTRFS info (device dm-0): first mount of filesystem b0afcb9a-4dc6-42cc-b61f-b370046a03ca
Apr 24 00:14:52.959132 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:14:52.971708 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Apr 24 00:14:52.971733 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 24 00:14:52.971747 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 24 00:14:52.975435 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 24 00:14:52.976510 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 24 00:14:52.977658 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 24 00:14:52.978364 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 24 00:14:52.992952 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 24 00:14:53.010869 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (669)
Apr 24 00:14:53.014870 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:14:53.017869 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:14:53.026271 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 00:14:53.026296 kernel: BTRFS info (device sda6): turning on async discard
Apr 24 00:14:53.026311 kernel: BTRFS info (device sda6): enabling free space tree
Apr 24 00:14:53.034860 kernel: BTRFS info (device sda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:14:53.035640 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 24 00:14:53.037366 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 24 00:14:53.130023 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 00:14:53.135170 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 00:14:53.156998 ignition[726]: Ignition 2.22.0
Apr 24 00:14:53.157012 ignition[726]: Stage: fetch-offline
Apr 24 00:14:53.157047 ignition[726]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:14:53.157057 ignition[726]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:14:53.157138 ignition[726]: parsed url from cmdline: ""
Apr 24 00:14:53.160666 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 00:14:53.157142 ignition[726]: no config URL provided
Apr 24 00:14:53.157148 ignition[726]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 00:14:53.157157 ignition[726]: no config at "/usr/lib/ignition/user.ign"
Apr 24 00:14:53.157162 ignition[726]: failed to fetch config: resource requires networking
Apr 24 00:14:53.157310 ignition[726]: Ignition finished successfully
Apr 24 00:14:53.186401 systemd-networkd[820]: lo: Link UP
Apr 24 00:14:53.186414 systemd-networkd[820]: lo: Gained carrier
Apr 24 00:14:53.188461 systemd-networkd[820]: Enumeration completed
Apr 24 00:14:53.189090 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 00:14:53.189094 systemd-networkd[820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 00:14:53.190247 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 00:14:53.191157 systemd-networkd[820]: eth0: Link UP
Apr 24 00:14:53.191307 systemd-networkd[820]: eth0: Gained carrier
Apr 24 00:14:53.191317 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 00:14:53.193892 systemd[1]: Reached target network.target - Network.
Apr 24 00:14:53.196798 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 24 00:14:53.224308 ignition[825]: Ignition 2.22.0
Apr 24 00:14:53.224323 ignition[825]: Stage: fetch
Apr 24 00:14:53.224445 ignition[825]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:14:53.224456 ignition[825]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:14:53.224536 ignition[825]: parsed url from cmdline: ""
Apr 24 00:14:53.224540 ignition[825]: no config URL provided
Apr 24 00:14:53.224545 ignition[825]: reading system config file "/usr/lib/ignition/user.ign"
Apr 24 00:14:53.224554 ignition[825]: no config at "/usr/lib/ignition/user.ign"
Apr 24 00:14:53.224768 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #1
Apr 24 00:14:53.224936 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 00:14:53.425906 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #2
Apr 24 00:14:53.426094 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 00:14:53.826480 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #3
Apr 24 00:14:53.826639 ignition[825]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Apr 24 00:14:53.968914 systemd-networkd[820]: eth0: DHCPv4 address 172.234.215.230/24, gateway 172.234.215.1 acquired from 23.205.167.175
Apr 24 00:14:54.627545 ignition[825]: PUT http://169.254.169.254/v1/token: attempt #4
Apr 24 00:14:54.721547 ignition[825]: PUT result: OK
Apr 24 00:14:54.722333 ignition[825]: GET http://169.254.169.254/v1/user-data: attempt #1
Apr 24 00:14:54.756212 systemd-networkd[820]: eth0: Gained IPv6LL
Apr 24 00:14:54.853679 ignition[825]: GET result: OK
Apr 24 00:14:54.853786 ignition[825]: parsing config with SHA512: f2a5aaa95e08dd8917d5e8d26df94e8d03ab68f4cc4ccb4270b19573f370dd953a43a69f9b68a6df03decca5c12c7af8d45d546bc627d18ea65199e106d35df2
Apr 24 00:14:54.859613 unknown[825]: fetched base config from "system"
Apr 24 00:14:54.859902 ignition[825]: fetch: fetch complete
Apr 24 00:14:54.859627 unknown[825]: fetched base config from "system"
Apr 24 00:14:54.859907 ignition[825]: fetch: fetch passed
Apr 24 00:14:54.859632 unknown[825]: fetched user config from "akamai"
Apr 24 00:14:54.859947 ignition[825]: Ignition finished successfully
Apr 24 00:14:54.867672 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 24 00:14:54.884518 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 24 00:14:54.911031 ignition[833]: Ignition 2.22.0
Apr 24 00:14:54.912029 ignition[833]: Stage: kargs
Apr 24 00:14:54.912143 ignition[833]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:14:54.912155 ignition[833]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:14:54.912983 ignition[833]: kargs: kargs passed
Apr 24 00:14:54.913022 ignition[833]: Ignition finished successfully
Apr 24 00:14:54.916094 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 24 00:14:54.919006 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 24 00:14:54.945604 ignition[839]: Ignition 2.22.0
Apr 24 00:14:54.946487 ignition[839]: Stage: disks
Apr 24 00:14:54.946630 ignition[839]: no configs at "/usr/lib/ignition/base.d"
Apr 24 00:14:54.946641 ignition[839]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:14:54.949099 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 24 00:14:54.947346 ignition[839]: disks: disks passed
Apr 24 00:14:54.950225 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 24 00:14:54.947385 ignition[839]: Ignition finished successfully
Apr 24 00:14:54.951593 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 24 00:14:54.953080 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 00:14:54.954379 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 00:14:54.955924 systemd[1]: Reached target basic.target - Basic System.
Apr 24 00:14:54.958134 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 24 00:14:54.979838 systemd-fsck[847]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Apr 24 00:14:54.982533 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 24 00:14:54.985264 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 24 00:14:55.094873 kernel: EXT4-fs (sda9): mounted filesystem 8c3ace63-1728-4b5e-a7b6-4ef650e41ba1 r/w with ordered data mode. Quota mode: none.
Apr 24 00:14:55.096020 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 24 00:14:55.097301 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 24 00:14:55.099721 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 00:14:55.101218 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 24 00:14:55.103564 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 24 00:14:55.103607 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 24 00:14:55.103630 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 00:14:55.113603 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 24 00:14:55.115105 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 24 00:14:55.126769 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (855)
Apr 24 00:14:55.126801 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:14:55.130868 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:14:55.139105 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 00:14:55.139134 kernel: BTRFS info (device sda6): turning on async discard
Apr 24 00:14:55.139147 kernel: BTRFS info (device sda6): enabling free space tree
Apr 24 00:14:55.144282 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 00:14:55.168218 initrd-setup-root[879]: cut: /sysroot/etc/passwd: No such file or directory
Apr 24 00:14:55.174251 initrd-setup-root[886]: cut: /sysroot/etc/group: No such file or directory
Apr 24 00:14:55.178820 initrd-setup-root[893]: cut: /sysroot/etc/shadow: No such file or directory
Apr 24 00:14:55.183875 initrd-setup-root[900]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 24 00:14:55.271244 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 24 00:14:55.273506 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 24 00:14:55.275933 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 24 00:14:55.297339 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 24 00:14:55.301871 kernel: BTRFS info (device sda6): last unmount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:14:55.316071 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 24 00:14:55.327830 ignition[968]: INFO : Ignition 2.22.0
Apr 24 00:14:55.329322 ignition[968]: INFO : Stage: mount
Apr 24 00:14:55.329322 ignition[968]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:14:55.329322 ignition[968]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:14:55.331817 ignition[968]: INFO : mount: mount passed
Apr 24 00:14:55.331817 ignition[968]: INFO : Ignition finished successfully
Apr 24 00:14:55.331302 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 24 00:14:55.334747 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 24 00:14:56.097311 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 24 00:14:56.121886 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (980)
Apr 24 00:14:56.121922 kernel: BTRFS info (device sda6): first mount of filesystem 198e7c3b-b6f6-48f6-8d3f-d053e5a12995
Apr 24 00:14:56.125488 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 24 00:14:56.134191 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 24 00:14:56.134214 kernel: BTRFS info (device sda6): turning on async discard
Apr 24 00:14:56.134226 kernel: BTRFS info (device sda6): enabling free space tree
Apr 24 00:14:56.138235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 24 00:14:56.168909 ignition[997]: INFO : Ignition 2.22.0
Apr 24 00:14:56.168909 ignition[997]: INFO : Stage: files
Apr 24 00:14:56.170540 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:14:56.170540 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:14:56.170540 ignition[997]: DEBUG : files: compiled without relabeling support, skipping
Apr 24 00:14:56.170540 ignition[997]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 24 00:14:56.170540 ignition[997]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 24 00:14:56.175250 ignition[997]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 24 00:14:56.175250 ignition[997]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 24 00:14:56.175250 ignition[997]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 24 00:14:56.175250 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 00:14:56.175250 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 24 00:14:56.173311 unknown[997]: wrote ssh authorized keys file for user: core
Apr 24 00:14:56.479671 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 24 00:14:56.550684 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 00:14:56.552259 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 24 00:14:57.123500 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 24 00:14:57.373120 ignition[997]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 24 00:14:57.373120 ignition[997]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 24 00:14:57.376235 ignition[997]: INFO : files: files passed
Apr 24 00:14:57.376235 ignition[997]: INFO : Ignition finished successfully
Apr 24 00:14:57.379898 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 24 00:14:57.384024 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 24 00:14:57.391001 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 24 00:14:57.404484 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 24 00:14:57.404633 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 24 00:14:57.413133 initrd-setup-root-after-ignition[1026]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:14:57.413133 initrd-setup-root-after-ignition[1026]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:14:57.416708 initrd-setup-root-after-ignition[1030]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 24 00:14:57.418617 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 00:14:57.421193 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 24 00:14:57.423613 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 24 00:14:57.475989 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 24 00:14:57.476163 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 24 00:14:57.478321 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 24 00:14:57.479618 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 24 00:14:57.481333 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 24 00:14:57.483013 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 24 00:14:57.508255 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 00:14:57.511231 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 24 00:14:57.534654 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 24 00:14:57.535673 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:14:57.537535 systemd[1]: Stopped target timers.target - Timer Units.
Apr 24 00:14:57.539196 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 24 00:14:57.539419 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 24 00:14:57.541216 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 24 00:14:57.542387 systemd[1]: Stopped target basic.target - Basic System.
Apr 24 00:14:57.544056 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 24 00:14:57.545633 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 24 00:14:57.547253 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 24 00:14:57.548932 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 24 00:14:57.551047 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 24 00:14:57.552719 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 24 00:14:57.554490 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 24 00:14:57.556171 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 24 00:14:57.558002 systemd[1]: Stopped target swap.target - Swaps.
Apr 24 00:14:57.559616 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 24 00:14:57.559820 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 24 00:14:57.561628 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:14:57.562778 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:14:57.564369 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 24 00:14:57.564517 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:14:57.566086 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 24 00:14:57.566228 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 24 00:14:57.568479 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 24 00:14:57.568736 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 24 00:14:57.569940 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 24 00:14:57.570076 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 24 00:14:57.573953 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 24 00:14:57.576409 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 24 00:14:57.578335 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:14:57.585282 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 24 00:14:57.586193 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 24 00:14:57.586404 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:14:57.589021 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 24 00:14:57.589209 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 24 00:14:57.600087 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 24 00:14:57.601240 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 24 00:14:57.636137 ignition[1050]: INFO : Ignition 2.22.0
Apr 24 00:14:57.636137 ignition[1050]: INFO : Stage: umount
Apr 24 00:14:57.640079 ignition[1050]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 24 00:14:57.640079 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Apr 24 00:14:57.640079 ignition[1050]: INFO : umount: umount passed
Apr 24 00:14:57.640079 ignition[1050]: INFO : Ignition finished successfully
Apr 24 00:14:57.639118 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 24 00:14:57.645254 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 24 00:14:57.645738 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 24 00:14:57.647326 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 24 00:14:57.647441 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 24 00:14:57.653075 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 24 00:14:57.653174 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 24 00:14:57.654977 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 24 00:14:57.655047 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 24 00:14:57.656460 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 24 00:14:57.656529 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 24 00:14:57.658350 systemd[1]: Stopped target network.target - Network.
Apr 24 00:14:57.660134 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 24 00:14:57.660193 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 24 00:14:57.661573 systemd[1]: Stopped target paths.target - Path Units.
Apr 24 00:14:57.662904 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 24 00:14:57.666884 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:14:57.667967 systemd[1]: Stopped target slices.target - Slice Units.
Apr 24 00:14:57.669609 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 24 00:14:57.671348 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 24 00:14:57.671391 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 24 00:14:57.672730 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 24 00:14:57.672770 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 24 00:14:57.674058 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 24 00:14:57.674109 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 24 00:14:57.675492 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 24 00:14:57.675540 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 24 00:14:57.676901 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 24 00:14:57.676964 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 24 00:14:57.678437 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 24 00:14:57.680007 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 24 00:14:57.683288 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 24 00:14:57.683401 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 24 00:14:57.686593 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 24 00:14:57.687344 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 24 00:14:57.687417 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 00:14:57.691190 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 24 00:14:57.691486 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 24 00:14:57.691617 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 24 00:14:57.694819 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 24 00:14:57.695264 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 24 00:14:57.697076 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 24 00:14:57.697121 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:14:57.699638 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 24 00:14:57.702098 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 24 00:14:57.702152 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 24 00:14:57.704344 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 24 00:14:57.704397 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:14:57.705957 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 24 00:14:57.706008 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:14:57.706830 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 24 00:14:57.713298 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 24 00:14:57.725224 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 24 00:14:57.725399 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 00:14:57.726627 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 24 00:14:57.726723 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 24 00:14:57.728601 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 24 00:14:57.728670 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:14:57.730003 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 24 00:14:57.730042 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:14:57.731348 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 24 00:14:57.731399 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 24 00:14:57.733302 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 24 00:14:57.733352 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 24 00:14:57.735049 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 24 00:14:57.735101 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 24 00:14:57.737767 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 24 00:14:57.739723 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 24 00:14:57.739780 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:14:57.743866 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 24 00:14:57.743917 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 00:14:57.747136 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 24 00:14:57.747188 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:14:57.754827 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 24 00:14:57.754954 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 24 00:14:57.756362 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 24 00:14:57.758388 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 24 00:14:57.777599 systemd[1]: Switching root.
Apr 24 00:14:57.811419 systemd-journald[187]: Journal stopped
Apr 24 00:14:59.063308 systemd-journald[187]: Received SIGTERM from PID 1 (systemd).
Apr 24 00:14:59.063335 kernel: SELinux: policy capability network_peer_controls=1
Apr 24 00:14:59.063347 kernel: SELinux: policy capability open_perms=1
Apr 24 00:14:59.063357 kernel: SELinux: policy capability extended_socket_class=1
Apr 24 00:14:59.063366 kernel: SELinux: policy capability always_check_network=0
Apr 24 00:14:59.063377 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 24 00:14:59.063387 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 24 00:14:59.063396 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 24 00:14:59.063406 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 24 00:14:59.063415 kernel: SELinux: policy capability userspace_initial_context=0
Apr 24 00:14:59.063424 kernel: audit: type=1403 audit(1776989697.991:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 24 00:14:59.063435 systemd[1]: Successfully loaded SELinux policy in 89.564ms.
Apr 24 00:14:59.063448 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.282ms.
Apr 24 00:14:59.063459 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 24 00:14:59.063470 systemd[1]: Detected virtualization kvm.
Apr 24 00:14:59.063480 systemd[1]: Detected architecture x86-64.
Apr 24 00:14:59.063492 systemd[1]: Detected first boot.
Apr 24 00:14:59.063502 systemd[1]: Initializing machine ID from random generator.
Apr 24 00:14:59.063512 zram_generator::config[1094]: No configuration found.
Apr 24 00:14:59.063524 kernel: Guest personality initialized and is inactive
Apr 24 00:14:59.063534 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 24 00:14:59.063543 kernel: Initialized host personality
Apr 24 00:14:59.063553 kernel: NET: Registered PF_VSOCK protocol family
Apr 24 00:14:59.063563 systemd[1]: Populated /etc with preset unit settings.
Apr 24 00:14:59.063576 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 24 00:14:59.063586 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 24 00:14:59.063596 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 24 00:14:59.063606 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 24 00:14:59.063616 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 24 00:14:59.063626 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 24 00:14:59.063655 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 24 00:14:59.063668 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 24 00:14:59.063678 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 24 00:14:59.063837 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 24 00:14:59.063891 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 24 00:14:59.063902 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 24 00:14:59.063913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 24 00:14:59.063923 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 24 00:14:59.063933 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 24 00:14:59.063947 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 24 00:14:59.063960 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 24 00:14:59.063971 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 24 00:14:59.063981 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 24 00:14:59.063992 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 24 00:14:59.064002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 24 00:14:59.064012 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 24 00:14:59.064025 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 24 00:14:59.064035 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 24 00:14:59.064045 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 24 00:14:59.064056 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 24 00:14:59.064066 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 24 00:14:59.064076 systemd[1]: Reached target slices.target - Slice Units.
Apr 24 00:14:59.064087 systemd[1]: Reached target swap.target - Swaps.
Apr 24 00:14:59.064097 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 24 00:14:59.064109 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 24 00:14:59.064121 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 24 00:14:59.064132 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 24 00:14:59.064143 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 24 00:14:59.064153 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 24 00:14:59.064166 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 24 00:14:59.064176 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 24 00:14:59.064186 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 24 00:14:59.064197 systemd[1]: Mounting media.mount - External Media Directory...
Apr 24 00:14:59.064207 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:14:59.064218 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 24 00:14:59.064228 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 24 00:14:59.064239 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 24 00:14:59.064251 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 24 00:14:59.064262 systemd[1]: Reached target machines.target - Containers.
Apr 24 00:14:59.064272 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 24 00:14:59.064283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 00:14:59.064293 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 24 00:14:59.064304 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 24 00:14:59.064314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 00:14:59.064325 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 00:14:59.064336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 00:14:59.064348 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 24 00:14:59.064359 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 00:14:59.064369 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 24 00:14:59.064380 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 24 00:14:59.064390 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 24 00:14:59.064401 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 24 00:14:59.064411 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 24 00:14:59.064422 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 24 00:14:59.064434 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 24 00:14:59.064445 kernel: loop: module loaded
Apr 24 00:14:59.064454 kernel: fuse: init (API version 7.41)
Apr 24 00:14:59.064464 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 24 00:14:59.064475 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 24 00:14:59.064485 kernel: ACPI: bus type drm_connector registered
Apr 24 00:14:59.064495 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 24 00:14:59.064505 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 24 00:14:59.064518 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 24 00:14:59.064650 systemd-journald[1182]: Collecting audit messages is disabled.
Apr 24 00:14:59.064864 systemd-journald[1182]: Journal started
Apr 24 00:14:59.064889 systemd-journald[1182]: Runtime Journal (/run/log/journal/7edd8ac58a2f4352b2e1ee3be5567f03) is 8M, max 78.2M, 70.2M free.
Apr 24 00:14:58.681637 systemd[1]: Queued start job for default target multi-user.target.
Apr 24 00:14:58.694158 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 24 00:14:58.694892 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 24 00:14:59.067880 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 24 00:14:59.071005 systemd[1]: Stopped verity-setup.service.
Apr 24 00:14:59.082627 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:14:59.082656 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 24 00:14:59.086619 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 24 00:14:59.089225 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 24 00:14:59.090247 systemd[1]: Mounted media.mount - External Media Directory.
Apr 24 00:14:59.091144 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 24 00:14:59.092221 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 24 00:14:59.094205 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 24 00:14:59.095192 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 24 00:14:59.096327 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 24 00:14:59.099329 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 24 00:14:59.099543 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 24 00:14:59.100781 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 00:14:59.101375 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 00:14:59.102642 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 00:14:59.103073 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 00:14:59.104070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 00:14:59.104264 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 00:14:59.107118 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 24 00:14:59.107344 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 24 00:14:59.108577 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 00:14:59.109014 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 00:14:59.110324 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 24 00:14:59.111682 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 24 00:14:59.113096 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 24 00:14:59.114201 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 24 00:14:59.127269 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 24 00:14:59.131172 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 24 00:14:59.136123 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 24 00:14:59.137911 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 24 00:14:59.137995 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 24 00:14:59.139744 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 24 00:14:59.148599 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 24 00:14:59.149462 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 00:14:59.152419 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 24 00:14:59.154429 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 24 00:14:59.155368 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 00:14:59.157417 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 24 00:14:59.160382 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 00:14:59.164023 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 24 00:14:59.172053 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 24 00:14:59.183015 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 24 00:14:59.188244 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 24 00:14:59.189357 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 24 00:14:59.211104 systemd-journald[1182]: Time spent on flushing to /var/log/journal/7edd8ac58a2f4352b2e1ee3be5567f03 is 88.069ms for 1003 entries.
Apr 24 00:14:59.211104 systemd-journald[1182]: System Journal (/var/log/journal/7edd8ac58a2f4352b2e1ee3be5567f03) is 8M, max 195.6M, 187.6M free.
Apr 24 00:14:59.334466 systemd-journald[1182]: Received client request to flush runtime journal.
Apr 24 00:14:59.334515 kernel: loop0: detected capacity change from 0 to 110984
Apr 24 00:14:59.334540 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 24 00:14:59.334559 kernel: loop1: detected capacity change from 0 to 128560
Apr 24 00:14:59.223196 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 24 00:14:59.226207 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 24 00:14:59.232288 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 24 00:14:59.272989 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 24 00:14:59.304410 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 24 00:14:59.314932 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 24 00:14:59.316199 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 24 00:14:59.319592 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 24 00:14:59.338700 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 24 00:14:59.368937 kernel: loop2: detected capacity change from 0 to 8
Apr 24 00:14:59.376023 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Apr 24 00:14:59.376295 systemd-tmpfiles[1234]: ACLs are not supported, ignoring.
Apr 24 00:14:59.392706 kernel: loop3: detected capacity change from 0 to 228704
Apr 24 00:14:59.391200 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 24 00:14:59.427311 kernel: loop4: detected capacity change from 0 to 110984 Apr 24 00:14:59.440879 kernel: loop5: detected capacity change from 0 to 128560 Apr 24 00:14:59.461928 kernel: loop6: detected capacity change from 0 to 8 Apr 24 00:14:59.467863 kernel: loop7: detected capacity change from 0 to 228704 Apr 24 00:14:59.484346 (sd-merge)[1246]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Apr 24 00:14:59.485695 (sd-merge)[1246]: Merged extensions into '/usr'. Apr 24 00:14:59.495290 systemd[1]: Reload requested from client PID 1219 ('systemd-sysext') (unit systemd-sysext.service)... Apr 24 00:14:59.495393 systemd[1]: Reloading... Apr 24 00:14:59.562968 zram_generator::config[1268]: No configuration found. Apr 24 00:14:59.722145 ldconfig[1214]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 24 00:14:59.836154 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 24 00:14:59.836319 systemd[1]: Reloading finished in 338 ms. Apr 24 00:14:59.854658 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 24 00:14:59.856174 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 24 00:14:59.857340 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 24 00:14:59.871338 systemd[1]: Starting ensure-sysext.service... Apr 24 00:14:59.875965 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 24 00:14:59.880482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 24 00:14:59.896189 systemd[1]: Reload requested from client PID 1316 ('systemctl') (unit ensure-sysext.service)... Apr 24 00:14:59.896282 systemd[1]: Reloading... Apr 24 00:14:59.904116 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Apr 24 00:14:59.904160 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 24 00:14:59.904461 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 24 00:14:59.904724 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 24 00:14:59.906285 systemd-tmpfiles[1317]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 24 00:14:59.907789 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Apr 24 00:14:59.907955 systemd-tmpfiles[1317]: ACLs are not supported, ignoring.
Apr 24 00:14:59.916214 systemd-tmpfiles[1317]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 00:14:59.916303 systemd-tmpfiles[1317]: Skipping /boot
Apr 24 00:14:59.932529 systemd-tmpfiles[1317]: Detected autofs mount point /boot during canonicalization of boot.
Apr 24 00:14:59.933974 systemd-tmpfiles[1317]: Skipping /boot
Apr 24 00:14:59.952225 systemd-udevd[1318]: Using default interface naming scheme 'v255'.
Apr 24 00:14:59.991873 zram_generator::config[1347]: No configuration found.
Apr 24 00:15:00.241662 systemd[1]: Reloading finished in 344 ms.
Apr 24 00:15:00.252571 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 24 00:15:00.255178 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 24 00:15:00.292687 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 24 00:15:00.302059 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 24 00:15:00.306987 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 24 00:15:00.309192 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 24 00:15:00.317467 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 24 00:15:00.326031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 24 00:15:00.331158 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 24 00:15:00.345981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:15:00.346147 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 00:15:00.348202 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 00:15:00.374503 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 24 00:15:00.382476 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 00:15:00.401655 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 00:15:00.403962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 00:15:00.404125 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 24 00:15:00.404261 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:15:00.406527 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 00:15:00.407585 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 00:15:00.444744 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 24 00:15:00.459369 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 00:15:00.459643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 00:15:00.463864 kernel: mousedev: PS/2 mouse device common for all mice
Apr 24 00:15:00.466443 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 24 00:15:00.468508 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 00:15:00.469548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 00:15:00.479449 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Apr 24 00:15:00.486022 kernel: ACPI: button: Power Button [PWRF]
Apr 24 00:15:00.495585 augenrules[1468]: No rules
Apr 24 00:15:00.497903 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 24 00:15:00.499713 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 00:15:00.500533 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 24 00:15:00.550357 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:15:00.552328 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 24 00:15:00.554217 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 24 00:15:00.555691 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 24 00:15:00.559508 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 24 00:15:00.569887 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 24 00:15:00.573880 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 24 00:15:00.576072 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 24 00:15:00.581237 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 24 00:15:00.583157 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 24 00:15:00.584417 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 24 00:15:00.585931 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 24 00:15:00.590165 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 24 00:15:00.593566 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 24 00:15:00.594296 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 24 00:15:00.594380 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 24 00:15:00.595983 systemd[1]: Finished ensure-sysext.service.
Apr 24 00:15:00.598380 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 24 00:15:00.607883 augenrules[1480]: /sbin/augenrules: No change
Apr 24 00:15:00.599123 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 24 00:15:00.609936 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 24 00:15:00.617114 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 24 00:15:00.625787 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 24 00:15:00.628445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 24 00:15:00.628707 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 24 00:15:00.651156 augenrules[1511]: No rules
Apr 24 00:15:00.652360 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 00:15:00.652901 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 24 00:15:00.655395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 24 00:15:00.655625 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 24 00:15:00.658321 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 24 00:15:00.658556 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 24 00:15:00.666569 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 24 00:15:00.666637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 24 00:15:00.678220 kernel: EDAC MC: Ver: 3.0.0
Apr 24 00:15:00.680142 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 24 00:15:00.786659 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 24 00:15:00.898919 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 24 00:15:00.936257 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 24 00:15:00.937110 systemd[1]: Reached target time-set.target - System Time Set.
Apr 24 00:15:00.940970 systemd-networkd[1428]: lo: Link UP
Apr 24 00:15:00.940981 systemd-networkd[1428]: lo: Gained carrier
Apr 24 00:15:00.942730 systemd-networkd[1428]: Enumeration completed
Apr 24 00:15:00.942750 systemd-timesyncd[1504]: No network connectivity, watching for changes.
Apr 24 00:15:00.942815 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 24 00:15:00.943144 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 00:15:00.943149 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 24 00:15:00.944722 systemd-networkd[1428]: eth0: Link UP
Apr 24 00:15:00.944977 systemd-networkd[1428]: eth0: Gained carrier
Apr 24 00:15:00.945038 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 24 00:15:00.946265 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Apr 24 00:15:00.948597 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 24 00:15:00.958058 systemd-resolved[1429]: Positive Trust Anchors:
Apr 24 00:15:00.958076 systemd-resolved[1429]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 24 00:15:00.958107 systemd-resolved[1429]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 24 00:15:00.961641 systemd-resolved[1429]: Defaulting to hostname 'linux'.
Apr 24 00:15:00.963107 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 24 00:15:00.963948 systemd[1]: Reached target network.target - Network.
Apr 24 00:15:00.965948 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 24 00:15:00.967305 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 24 00:15:00.968190 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 24 00:15:00.969154 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 24 00:15:00.969984 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Apr 24 00:15:00.970951 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 24 00:15:00.971812 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 24 00:15:00.972649 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 24 00:15:00.973512 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 24 00:15:00.973540 systemd[1]: Reached target paths.target - Path Units.
Apr 24 00:15:00.974224 systemd[1]: Reached target timers.target - Timer Units.
Apr 24 00:15:00.976060 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 24 00:15:00.978323 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 24 00:15:00.981034 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Apr 24 00:15:00.981994 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Apr 24 00:15:00.982795 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Apr 24 00:15:00.985640 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 24 00:15:00.986744 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Apr 24 00:15:00.988527 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Apr 24 00:15:00.989520 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 24 00:15:00.991480 systemd[1]: Reached target sockets.target - Socket Units.
Apr 24 00:15:00.992269 systemd[1]: Reached target basic.target - Basic System.
Apr 24 00:15:00.993090 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 24 00:15:00.993186 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 24 00:15:00.994579 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 24 00:15:00.997958 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 24 00:15:01.002001 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 24 00:15:01.006091 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 24 00:15:01.009040 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 24 00:15:01.012981 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 24 00:15:01.013944 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 24 00:15:01.016312 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Apr 24 00:15:01.045490 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 24 00:15:01.048108 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 24 00:15:01.049882 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing passwd entry cache
Apr 24 00:15:01.048946 oslogin_cache_refresh[1547]: Refreshing passwd entry cache
Apr 24 00:15:01.050730 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 24 00:15:01.052560 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting users, quitting
Apr 24 00:15:01.052602 oslogin_cache_refresh[1547]: Failure getting users, quitting
Apr 24 00:15:01.052657 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 24 00:15:01.052699 oslogin_cache_refresh[1547]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Apr 24 00:15:01.052772 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Refreshing group entry cache
Apr 24 00:15:01.052801 oslogin_cache_refresh[1547]: Refreshing group entry cache
Apr 24 00:15:01.053317 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Failure getting groups, quitting
Apr 24 00:15:01.054584 oslogin_cache_refresh[1547]: Failure getting groups, quitting
Apr 24 00:15:01.054915 google_oslogin_nss_cache[1547]: oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 24 00:15:01.054600 oslogin_cache_refresh[1547]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Apr 24 00:15:01.058048 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 24 00:15:01.064340 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 24 00:15:01.066568 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 24 00:15:01.069176 jq[1545]: false
Apr 24 00:15:01.076667 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 24 00:15:01.078028 systemd[1]: Starting update-engine.service - Update Engine...
Apr 24 00:15:01.082716 extend-filesystems[1546]: Found /dev/sda6
Apr 24 00:15:01.090384 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 24 00:15:01.096996 coreos-metadata[1542]: Apr 24 00:15:01.096 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 24 00:15:01.099214 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 24 00:15:01.100285 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 24 00:15:01.100535 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 24 00:15:01.102111 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Apr 24 00:15:01.102362 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Apr 24 00:15:01.102673 update_engine[1559]: I20260424 00:15:01.102613 1559 main.cc:92] Flatcar Update Engine starting
Apr 24 00:15:01.104171 extend-filesystems[1546]: Found /dev/sda9
Apr 24 00:15:01.107406 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 24 00:15:01.107657 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 24 00:15:01.119876 extend-filesystems[1546]: Checking size of /dev/sda9
Apr 24 00:15:01.143090 jq[1561]: true
Apr 24 00:15:01.151765 dbus-daemon[1543]: [system] SELinux support is enabled
Apr 24 00:15:01.152165 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 24 00:15:01.156258 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 24 00:15:01.156289 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 24 00:15:01.157260 update_engine[1559]: I20260424 00:15:01.157123 1559 update_check_scheduler.cc:74] Next update check in 9m31s
Apr 24 00:15:01.158059 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 24 00:15:01.158085 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 24 00:15:01.159993 systemd[1]: Started update-engine.service - Update Engine.
Apr 24 00:15:01.163059 extend-filesystems[1546]: Resized partition /dev/sda9
Apr 24 00:15:01.168023 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 24 00:15:01.169075 extend-filesystems[1590]: resize2fs 1.47.3 (8-Jul-2025)
Apr 24 00:15:01.169802 systemd[1]: motdgen.service: Deactivated successfully.
Apr 24 00:15:01.170090 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 24 00:15:01.174222 tar[1565]: linux-amd64/LICENSE
Apr 24 00:15:01.174508 tar[1565]: linux-amd64/helm
Apr 24 00:15:01.182868 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 20360187 blocks
Apr 24 00:15:01.181527 (ntainerd)[1582]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 24 00:15:01.185707 jq[1586]: true
Apr 24 00:15:01.197025 systemd-logind[1555]: Watching system buttons on /dev/input/event2 (Power Button)
Apr 24 00:15:01.197067 systemd-logind[1555]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 24 00:15:01.197404 systemd-logind[1555]: New seat seat0.
Apr 24 00:15:01.199131 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 24 00:15:01.324699 bash[1613]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 00:15:01.328687 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 24 00:15:01.334811 systemd[1]: Starting sshkeys.service...
Apr 24 00:15:01.401074 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 24 00:15:01.405866 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 24 00:15:01.422260 locksmithd[1589]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 24 00:15:01.488652 containerd[1582]: time="2026-04-24T00:15:01Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Apr 24 00:15:01.492167 containerd[1582]: time="2026-04-24T00:15:01.492143310Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Apr 24 00:15:01.509838 coreos-metadata[1623]: Apr 24 00:15:01.509 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Apr 24 00:15:01.511075 containerd[1582]: time="2026-04-24T00:15:01.511036729Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.56µs"
Apr 24 00:15:01.511114 containerd[1582]: time="2026-04-24T00:15:01.511073019Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Apr 24 00:15:01.511114 containerd[1582]: time="2026-04-24T00:15:01.511095569Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Apr 24 00:15:01.511301 containerd[1582]: time="2026-04-24T00:15:01.511278169Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Apr 24 00:15:01.511331 containerd[1582]: time="2026-04-24T00:15:01.511304069Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Apr 24 00:15:01.511350 containerd[1582]: time="2026-04-24T00:15:01.511334859Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 24 00:15:01.511424 containerd[1582]: time="2026-04-24T00:15:01.511402919Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Apr 24 00:15:01.511443 containerd[1582]: time="2026-04-24T00:15:01.511425069Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 24 00:15:01.511708 containerd[1582]: time="2026-04-24T00:15:01.511685489Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Apr 24 00:15:01.511726 containerd[1582]: time="2026-04-24T00:15:01.511705899Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 24 00:15:01.511726 containerd[1582]: time="2026-04-24T00:15:01.511719489Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Apr 24 00:15:01.511765 containerd[1582]: time="2026-04-24T00:15:01.511730269Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Apr 24 00:15:01.511839 containerd[1582]: time="2026-04-24T00:15:01.511819799Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Apr 24 00:15:01.515719 containerd[1582]: time="2026-04-24T00:15:01.515695183Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 24 00:15:01.516139 containerd[1582]: time="2026-04-24T00:15:01.516119914Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Apr 24 00:15:01.516552 containerd[1582]: time="2026-04-24T00:15:01.516517164Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Apr 24 00:15:01.516627 containerd[1582]: time="2026-04-24T00:15:01.516610684Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Apr 24 00:15:01.517892 containerd[1582]: time="2026-04-24T00:15:01.517652295Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Apr 24 00:15:01.517999 containerd[1582]: time="2026-04-24T00:15:01.517981286Z" level=info msg="metadata content store policy set" policy=shared
Apr 24 00:15:01.533461 containerd[1582]: time="2026-04-24T00:15:01.533433151Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Apr 24 00:15:01.533868 containerd[1582]: time="2026-04-24T00:15:01.533677521Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Apr 24 00:15:01.533950 containerd[1582]: time="2026-04-24T00:15:01.533934761Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Apr 24 00:15:01.535282 containerd[1582]: time="2026-04-24T00:15:01.534988543Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Apr 24 00:15:01.535282 containerd[1582]: time="2026-04-24T00:15:01.535102443Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Apr 24 00:15:01.535282 containerd[1582]: time="2026-04-24T00:15:01.535114903Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Apr 24 00:15:01.535282 containerd[1582]: time="2026-04-24T00:15:01.535126403Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535442183Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535467353Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535477703Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535485823Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535496633Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535614713Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535638673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535651813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535661903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535671053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535680173Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535689873Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535699253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535709973Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Apr 24 00:15:01.536198 containerd[1582]: time="2026-04-24T00:15:01.535719503Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Apr 24 00:15:01.536476 containerd[1582]: time="2026-04-24T00:15:01.535728853Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Apr 24 00:15:01.536476 containerd[1582]: time="2026-04-24T00:15:01.535770863Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Apr 24 00:15:01.536476 containerd[1582]: time="2026-04-24T00:15:01.535782363Z" level=info msg="Start snapshots syncer"
Apr 24 00:15:01.536476 containerd[1582]: time="2026-04-24T00:15:01.535801733Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Apr 24 00:15:01.538056 containerd[1582]: time="2026-04-24T00:15:01.537732305Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Apr 24 00:15:01.538367 containerd[1582]: time="2026-04-24T00:15:01.538226966Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Apr 24 00:15:01.538524 containerd[1582]: time="2026-04-24T00:15:01.538507286Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Apr 24 00:15:01.539866 containerd[1582]: time="2026-04-24T00:15:01.539693897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Apr 24 00:15:01.539866 containerd[1582]: time="2026-04-24T00:15:01.539728287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Apr 24 00:15:01.539866 containerd[1582]: time="2026-04-24T00:15:01.539740227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Apr 24 00:15:01.539866 containerd[1582]: time="2026-04-24T00:15:01.539792307Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Apr 24 00:15:01.539866 containerd[1582]: time="2026-04-24T00:15:01.539807647Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Apr 24 00:15:01.539866 containerd[1582]: time="2026-04-24T00:15:01.539818267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Apr 24 00:15:01.539866 containerd[1582]: time="2026-04-24T00:15:01.539829437Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Apr 24 00:15:01.540179 containerd[1582]: time="2026-04-24T00:15:01.540115518Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Apr 24 00:15:01.540550 containerd[1582]: time="2026-04-24T00:15:01.540332068Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Apr 24 00:15:01.540550 containerd[1582]: time="2026-04-24T00:15:01.540422208Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Apr 24 00:15:01.540820 containerd[1582]: time="2026-04-24T00:15:01.540771428Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 24 00:15:01.540820 containerd[1582]: time="2026-04-24T00:15:01.540793698Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Apr 24 00:15:01.541156 containerd[1582]: time="2026-04-24T00:15:01.540802838Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 24 00:15:01.541350 containerd[1582]: time="2026-04-24T00:15:01.541201089Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Apr 24 00:15:01.541528 containerd[1582]: time="2026-04-24T00:15:01.541215699Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Apr 24 00:15:01.541884 containerd[1582]: time="2026-04-24T00:15:01.541577999Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Apr 24 00:15:01.541884 containerd[1582]: time="2026-04-24T00:15:01.541795909Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Apr 24 00:15:01.541884 containerd[1582]: time="2026-04-24T00:15:01.541816269Z" level=info msg="runtime interface created"
Apr 24 00:15:01.541884 containerd[1582]: time="2026-04-24T00:15:01.541822369Z" level=info msg="created NRI interface"
Apr 24 00:15:01.542157 containerd[1582]: time="2026-04-24T00:15:01.542036060Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Apr 24 00:15:01.542407 containerd[1582]: time="2026-04-24T00:15:01.542120310Z" level=info msg="Connect containerd service"
Apr 24 00:15:01.542407 containerd[1582]: time="2026-04-24T00:15:01.542314280Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 24 00:15:01.545732
containerd[1582]: time="2026-04-24T00:15:01.545591073Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 00:15:01.557015 kernel: EXT4-fs (sda9): resized filesystem to 20360187 Apr 24 00:15:01.581690 extend-filesystems[1590]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 24 00:15:01.581690 extend-filesystems[1590]: old_desc_blocks = 1, new_desc_blocks = 10 Apr 24 00:15:01.581690 extend-filesystems[1590]: The filesystem on /dev/sda9 is now 20360187 (4k) blocks long. Apr 24 00:15:01.593667 extend-filesystems[1546]: Resized filesystem in /dev/sda9 Apr 24 00:15:01.584691 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 24 00:15:01.585614 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 24 00:15:01.701673 containerd[1582]: time="2026-04-24T00:15:01.701629549Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 24 00:15:01.701765 containerd[1582]: time="2026-04-24T00:15:01.701708729Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 24 00:15:01.701765 containerd[1582]: time="2026-04-24T00:15:01.701730829Z" level=info msg="Start subscribing containerd event" Apr 24 00:15:01.701765 containerd[1582]: time="2026-04-24T00:15:01.701752429Z" level=info msg="Start recovering state" Apr 24 00:15:01.701859 containerd[1582]: time="2026-04-24T00:15:01.701831349Z" level=info msg="Start event monitor" Apr 24 00:15:01.703855 containerd[1582]: time="2026-04-24T00:15:01.703215281Z" level=info msg="Start cni network conf syncer for default" Apr 24 00:15:01.703855 containerd[1582]: time="2026-04-24T00:15:01.703230971Z" level=info msg="Start streaming server" Apr 24 00:15:01.703855 containerd[1582]: time="2026-04-24T00:15:01.703245851Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 24 00:15:01.703855 containerd[1582]: time="2026-04-24T00:15:01.703252881Z" level=info msg="runtime interface starting up..." Apr 24 00:15:01.703855 containerd[1582]: time="2026-04-24T00:15:01.703258451Z" level=info msg="starting plugins..." Apr 24 00:15:01.703855 containerd[1582]: time="2026-04-24T00:15:01.703275841Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 24 00:15:01.703493 systemd[1]: Started containerd.service - containerd container runtime. Apr 24 00:15:01.705626 containerd[1582]: time="2026-04-24T00:15:01.705603893Z" level=info msg="containerd successfully booted in 0.217493s" Apr 24 00:15:01.779905 sshd_keygen[1580]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 24 00:15:01.811391 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 24 00:15:01.819265 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 24 00:15:01.845718 tar[1565]: linux-amd64/README.md Apr 24 00:15:01.852494 systemd[1]: issuegen.service: Deactivated successfully. Apr 24 00:15:01.853029 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Apr 24 00:15:01.861167 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 24 00:15:01.869085 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 24 00:15:01.880322 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 24 00:15:01.883555 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 24 00:15:01.887110 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 24 00:15:01.888033 systemd[1]: Reached target getty.target - Login Prompts.
Apr 24 00:15:02.113966 coreos-metadata[1542]: Apr 24 00:15:02.113 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Apr 24 00:15:02.256965 systemd-networkd[1428]: eth0: DHCPv4 address 172.234.215.230/24, gateway 172.234.215.1 acquired from 23.205.167.175
Apr 24 00:15:02.257094 dbus-daemon[1543]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1428 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 24 00:15:02.259784 systemd-timesyncd[1504]: Network configuration changed, trying to establish connection.
Apr 24 00:15:02.263397 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 24 00:15:02.365956 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 24 00:15:02.366340 dbus-daemon[1543]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 24 00:15:02.366999 dbus-daemon[1543]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1663 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 24 00:15:02.374399 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 24 00:15:02.480475 polkitd[1664]: Started polkitd version 126
Apr 24 00:15:02.485692 polkitd[1664]: Loading rules from directory /etc/polkit-1/rules.d
Apr 24 00:15:02.486117 polkitd[1664]: Loading rules from directory /run/polkit-1/rules.d
Apr 24 00:15:02.486186 polkitd[1664]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Apr 24 00:15:02.486485 polkitd[1664]: Loading rules from directory /usr/local/share/polkit-1/rules.d
Apr 24 00:15:02.486530 polkitd[1664]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4)
Apr 24 00:15:02.486584 polkitd[1664]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 24 00:15:02.487183 polkitd[1664]: Finished loading, compiling and executing 2 rules
Apr 24 00:15:02.487451 systemd[1]: Started polkit.service - Authorization Manager.
Apr 24 00:15:02.488306 dbus-daemon[1543]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 24 00:15:02.488799 polkitd[1664]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 24 00:15:02.499489 systemd-hostnamed[1663]: Hostname set to <172-234-215-230> (transient)
Apr 24 00:15:02.499863 systemd-resolved[1429]: System hostname changed to '172-234-215-230'.
Apr 24 00:15:02.525909 coreos-metadata[1623]: Apr 24 00:15:02.525 INFO Putting http://169.254.169.254/v1/token: Attempt #2
Apr 24 00:15:02.621554 coreos-metadata[1623]: Apr 24 00:15:02.621 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1
Apr 24 00:15:02.755984 coreos-metadata[1623]: Apr 24 00:15:02.755 INFO Fetch successful
Apr 24 00:15:02.778677 update-ssh-keys[1676]: Updated "/home/core/.ssh/authorized_keys"
Apr 24 00:15:02.780093 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 24 00:15:02.782667 systemd[1]: Finished sshkeys.service.
Apr 24 00:15:02.820008 systemd-networkd[1428]: eth0: Gained IPv6LL
Apr 24 00:15:02.823512 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 24 00:15:02.824980 systemd[1]: Reached target network-online.target - Network is Online.
Apr 24 00:15:02.827818 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:15:02.832071 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 24 00:15:02.858034 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 24 00:15:03.730068 systemd-timesyncd[1504]: Contacted time server 216.229.0.49:123 (1.flatcar.pool.ntp.org).
Apr 24 00:15:03.730148 systemd-timesyncd[1504]: Initial clock synchronization to Fri 2026-04-24 00:15:03.729930 UTC.
Apr 24 00:15:03.730298 systemd-resolved[1429]: Clock change detected. Flushing caches.
Apr 24 00:15:04.588817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:15:04.602109 (kubelet)[1696]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 00:15:04.977892 coreos-metadata[1542]: Apr 24 00:15:04.977 INFO Putting http://169.254.169.254/v1/token: Attempt #3
Apr 24 00:15:05.079150 coreos-metadata[1542]: Apr 24 00:15:05.079 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1
Apr 24 00:15:05.207711 kubelet[1696]: E0424 00:15:05.207653 1696 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 00:15:05.211594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 00:15:05.211858 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 00:15:05.212226 systemd[1]: kubelet.service: Consumed 959ms CPU time, 266.2M memory peak.
Apr 24 00:15:05.267323 coreos-metadata[1542]: Apr 24 00:15:05.267 INFO Fetch successful
Apr 24 00:15:05.267475 coreos-metadata[1542]: Apr 24 00:15:05.267 INFO Fetching http://169.254.169.254/v1/network: Attempt #1
Apr 24 00:15:05.525519 coreos-metadata[1542]: Apr 24 00:15:05.525 INFO Fetch successful
Apr 24 00:15:05.535325 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 24 00:15:05.539889 systemd[1]: Started sshd@0-172.234.215.230:22-20.229.252.112:53962.service - OpenSSH per-connection server daemon (20.229.252.112:53962).
Apr 24 00:15:05.652672 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 24 00:15:05.655298 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 24 00:15:05.655539 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 24 00:15:05.719502 systemd[1]: Startup finished in 3.104s (kernel) + 8.313s (initrd) + 6.962s (userspace) = 18.380s.
Apr 24 00:15:06.095945 sshd[1708]: Accepted publickey for core from 20.229.252.112 port 53962 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:15:06.098809 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:15:06.107198 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 24 00:15:06.109685 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 24 00:15:06.118980 systemd-logind[1555]: New session 1 of user core.
Apr 24 00:15:06.131053 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 24 00:15:06.136001 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 24 00:15:06.149563 (systemd)[1734]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 24 00:15:06.152802 systemd-logind[1555]: New session c1 of user core.
Apr 24 00:15:06.285778 systemd[1734]: Queued start job for default target default.target.
Apr 24 00:15:06.292869 systemd[1734]: Created slice app.slice - User Application Slice.
Apr 24 00:15:06.292899 systemd[1734]: Reached target paths.target - Paths.
Apr 24 00:15:06.292945 systemd[1734]: Reached target timers.target - Timers.
Apr 24 00:15:06.294485 systemd[1734]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 24 00:15:06.305742 systemd[1734]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 24 00:15:06.305931 systemd[1734]: Reached target sockets.target - Sockets.
Apr 24 00:15:06.306200 systemd[1734]: Reached target basic.target - Basic System.
Apr 24 00:15:06.306357 systemd[1734]: Reached target default.target - Main User Target.
Apr 24 00:15:06.306469 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 24 00:15:06.307615 systemd[1734]: Startup finished in 146ms.
Apr 24 00:15:06.312802 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 24 00:15:06.633586 systemd[1]: Started sshd@1-172.234.215.230:22-20.229.252.112:46404.service - OpenSSH per-connection server daemon (20.229.252.112:46404).
Apr 24 00:15:07.181210 sshd[1745]: Accepted publickey for core from 20.229.252.112 port 46404 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:15:07.184883 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:15:07.190117 systemd-logind[1555]: New session 2 of user core.
Apr 24 00:15:07.197797 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 24 00:15:07.477991 sshd[1748]: Connection closed by 20.229.252.112 port 46404
Apr 24 00:15:07.479834 sshd-session[1745]: pam_unix(sshd:session): session closed for user core
Apr 24 00:15:07.483776 systemd[1]: sshd@1-172.234.215.230:22-20.229.252.112:46404.service: Deactivated successfully.
Apr 24 00:15:07.485967 systemd[1]: session-2.scope: Deactivated successfully.
Apr 24 00:15:07.488376 systemd-logind[1555]: Session 2 logged out. Waiting for processes to exit.
Apr 24 00:15:07.490053 systemd-logind[1555]: Removed session 2.
Apr 24 00:15:07.584450 systemd[1]: Started sshd@2-172.234.215.230:22-20.229.252.112:46416.service - OpenSSH per-connection server daemon (20.229.252.112:46416).
Apr 24 00:15:08.107664 sshd[1754]: Accepted publickey for core from 20.229.252.112 port 46416 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:15:08.108614 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:15:08.113829 systemd-logind[1555]: New session 3 of user core.
Apr 24 00:15:08.119799 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 24 00:15:08.399163 sshd[1757]: Connection closed by 20.229.252.112 port 46416
Apr 24 00:15:08.400865 sshd-session[1754]: pam_unix(sshd:session): session closed for user core
Apr 24 00:15:08.405429 systemd[1]: sshd@2-172.234.215.230:22-20.229.252.112:46416.service: Deactivated successfully.
Apr 24 00:15:08.407851 systemd[1]: session-3.scope: Deactivated successfully.
Apr 24 00:15:08.408617 systemd-logind[1555]: Session 3 logged out. Waiting for processes to exit.
Apr 24 00:15:08.410288 systemd-logind[1555]: Removed session 3.
Apr 24 00:15:08.508933 systemd[1]: Started sshd@3-172.234.215.230:22-20.229.252.112:46418.service - OpenSSH per-connection server daemon (20.229.252.112:46418).
Apr 24 00:15:09.050754 sshd[1763]: Accepted publickey for core from 20.229.252.112 port 46418 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:15:09.052352 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:15:09.057473 systemd-logind[1555]: New session 4 of user core.
Apr 24 00:15:09.063756 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 24 00:15:09.357361 sshd[1766]: Connection closed by 20.229.252.112 port 46418
Apr 24 00:15:09.358808 sshd-session[1763]: pam_unix(sshd:session): session closed for user core
Apr 24 00:15:09.363405 systemd-logind[1555]: Session 4 logged out. Waiting for processes to exit.
Apr 24 00:15:09.364424 systemd[1]: sshd@3-172.234.215.230:22-20.229.252.112:46418.service: Deactivated successfully.
Apr 24 00:15:09.366544 systemd[1]: session-4.scope: Deactivated successfully.
Apr 24 00:15:09.368514 systemd-logind[1555]: Removed session 4.
Apr 24 00:15:09.465476 systemd[1]: Started sshd@4-172.234.215.230:22-20.229.252.112:46424.service - OpenSSH per-connection server daemon (20.229.252.112:46424).
Apr 24 00:15:09.997444 sshd[1772]: Accepted publickey for core from 20.229.252.112 port 46424 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:15:09.999586 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:15:10.009408 systemd-logind[1555]: New session 5 of user core.
Apr 24 00:15:10.015935 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 24 00:15:10.205903 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 24 00:15:10.206313 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 00:15:10.225971 sudo[1776]: pam_unix(sudo:session): session closed for user root
Apr 24 00:15:10.322931 sshd[1775]: Connection closed by 20.229.252.112 port 46424
Apr 24 00:15:10.324888 sshd-session[1772]: pam_unix(sshd:session): session closed for user core
Apr 24 00:15:10.328611 systemd[1]: sshd@4-172.234.215.230:22-20.229.252.112:46424.service: Deactivated successfully.
Apr 24 00:15:10.330864 systemd[1]: session-5.scope: Deactivated successfully.
Apr 24 00:15:10.332256 systemd-logind[1555]: Session 5 logged out. Waiting for processes to exit.
Apr 24 00:15:10.334337 systemd-logind[1555]: Removed session 5.
Apr 24 00:15:10.430361 systemd[1]: Started sshd@5-172.234.215.230:22-20.229.252.112:46440.service - OpenSSH per-connection server daemon (20.229.252.112:46440).
Apr 24 00:15:10.959786 sshd[1782]: Accepted publickey for core from 20.229.252.112 port 46440 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:15:10.961313 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:15:10.967116 systemd-logind[1555]: New session 6 of user core.
Apr 24 00:15:10.975761 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 24 00:15:11.161460 sudo[1787]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 24 00:15:11.162179 sudo[1787]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 00:15:11.166325 sudo[1787]: pam_unix(sudo:session): session closed for user root
Apr 24 00:15:11.172236 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Apr 24 00:15:11.172542 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 00:15:11.184658 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Apr 24 00:15:11.230301 augenrules[1809]: No rules
Apr 24 00:15:11.231007 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 24 00:15:11.231317 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Apr 24 00:15:11.232423 sudo[1786]: pam_unix(sudo:session): session closed for user root
Apr 24 00:15:11.328815 sshd[1785]: Connection closed by 20.229.252.112 port 46440
Apr 24 00:15:11.330622 sshd-session[1782]: pam_unix(sshd:session): session closed for user core
Apr 24 00:15:11.335101 systemd-logind[1555]: Session 6 logged out. Waiting for processes to exit.
Apr 24 00:15:11.335798 systemd[1]: sshd@5-172.234.215.230:22-20.229.252.112:46440.service: Deactivated successfully.
Apr 24 00:15:11.338488 systemd[1]: session-6.scope: Deactivated successfully.
Apr 24 00:15:11.340670 systemd-logind[1555]: Removed session 6.
Apr 24 00:15:11.440263 systemd[1]: Started sshd@6-172.234.215.230:22-20.229.252.112:46452.service - OpenSSH per-connection server daemon (20.229.252.112:46452).
Apr 24 00:15:11.991628 sshd[1818]: Accepted publickey for core from 20.229.252.112 port 46452 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI
Apr 24 00:15:11.993293 sshd-session[1818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 24 00:15:11.998083 systemd-logind[1555]: New session 7 of user core.
Apr 24 00:15:12.001761 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 24 00:15:12.202388 sudo[1822]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 24 00:15:12.202862 sudo[1822]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 24 00:15:12.501095 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 24 00:15:12.513992 (dockerd)[1841]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 24 00:15:12.769190 dockerd[1841]: time="2026-04-24T00:15:12.768600663Z" level=info msg="Starting up"
Apr 24 00:15:12.769651 dockerd[1841]: time="2026-04-24T00:15:12.769618474Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Apr 24 00:15:12.785551 dockerd[1841]: time="2026-04-24T00:15:12.785528530Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Apr 24 00:15:12.810970 systemd[1]: var-lib-docker-metacopy\x2dcheck3501336133-merged.mount: Deactivated successfully.
Apr 24 00:15:12.833279 dockerd[1841]: time="2026-04-24T00:15:12.833054017Z" level=info msg="Loading containers: start."
Apr 24 00:15:12.843659 kernel: Initializing XFRM netlink socket
Apr 24 00:15:13.111580 systemd-networkd[1428]: docker0: Link UP
Apr 24 00:15:13.114575 dockerd[1841]: time="2026-04-24T00:15:13.114537839Z" level=info msg="Loading containers: done."
Apr 24 00:15:13.126535 dockerd[1841]: time="2026-04-24T00:15:13.126210941Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 24 00:15:13.126535 dockerd[1841]: time="2026-04-24T00:15:13.126265611Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Apr 24 00:15:13.126535 dockerd[1841]: time="2026-04-24T00:15:13.126341231Z" level=info msg="Initializing buildkit"
Apr 24 00:15:13.128341 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3748108529-merged.mount: Deactivated successfully.
Apr 24 00:15:13.151844 dockerd[1841]: time="2026-04-24T00:15:13.151822026Z" level=info msg="Completed buildkit initialization"
Apr 24 00:15:13.155351 dockerd[1841]: time="2026-04-24T00:15:13.155280480Z" level=info msg="Daemon has completed initialization"
Apr 24 00:15:13.155407 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 24 00:15:13.155488 dockerd[1841]: time="2026-04-24T00:15:13.155315120Z" level=info msg="API listen on /run/docker.sock"
Apr 24 00:15:13.673569 containerd[1582]: time="2026-04-24T00:15:13.673446568Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 24 00:15:14.276557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255027476.mount: Deactivated successfully.
Apr 24 00:15:15.443030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 24 00:15:15.446731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 24 00:15:15.664673 containerd[1582]: time="2026-04-24T00:15:15.664058498Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:15.666357 containerd[1582]: time="2026-04-24T00:15:15.666296370Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193995"
Apr 24 00:15:15.666848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 24 00:15:15.668762 containerd[1582]: time="2026-04-24T00:15:15.668709922Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:15.671696 containerd[1582]: time="2026-04-24T00:15:15.671378805Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.997879407s"
Apr 24 00:15:15.671696 containerd[1582]: time="2026-04-24T00:15:15.671417565Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 24 00:15:15.672825 containerd[1582]: time="2026-04-24T00:15:15.672781587Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 24 00:15:15.673421 containerd[1582]: time="2026-04-24T00:15:15.673106947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:15.676981 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 24 00:15:15.715020 kubelet[2117]: E0424 00:15:15.714871 2117 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 24 00:15:15.721054 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 24 00:15:15.721326 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 24 00:15:15.722454 systemd[1]: kubelet.service: Consumed 212ms CPU time, 108.5M memory peak.
Apr 24 00:15:17.121653 containerd[1582]: time="2026-04-24T00:15:17.120596454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:17.126066 containerd[1582]: time="2026-04-24T00:15:17.122290006Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171453"
Apr 24 00:15:17.126066 containerd[1582]: time="2026-04-24T00:15:17.122401076Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:17.126066 containerd[1582]: time="2026-04-24T00:15:17.125980209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:17.127029 containerd[1582]: time="2026-04-24T00:15:17.126887600Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.453680743s"
Apr 24 00:15:17.127029 containerd[1582]: time="2026-04-24T00:15:17.126925490Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 24 00:15:17.130562 containerd[1582]: time="2026-04-24T00:15:17.130485064Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 24 00:15:18.323218 containerd[1582]: time="2026-04-24T00:15:18.323159266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:18.324420 containerd[1582]: time="2026-04-24T00:15:18.324386678Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289762"
Apr 24 00:15:18.325600 containerd[1582]: time="2026-04-24T00:15:18.325147378Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:18.329059 containerd[1582]: time="2026-04-24T00:15:18.329028782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:18.329918 containerd[1582]: time="2026-04-24T00:15:18.329830253Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.199258259s"
Apr 24 00:15:18.329918 containerd[1582]: time="2026-04-24T00:15:18.329874123Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 24 00:15:18.333183 containerd[1582]: time="2026-04-24T00:15:18.333156166Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 24 00:15:19.422154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1431573618.mount: Deactivated successfully.
Apr 24 00:15:19.803935 containerd[1582]: time="2026-04-24T00:15:19.803882717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:19.804696 containerd[1582]: time="2026-04-24T00:15:19.804663087Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010717"
Apr 24 00:15:19.805687 containerd[1582]: time="2026-04-24T00:15:19.805377128Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:19.807168 containerd[1582]: time="2026-04-24T00:15:19.807121750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:19.807907 containerd[1582]: time="2026-04-24T00:15:19.807863361Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.474673175s"
Apr 24 00:15:19.807907 containerd[1582]: time="2026-04-24T00:15:19.807904441Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 24 00:15:19.808656 containerd[1582]: time="2026-04-24T00:15:19.808492041Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 24 00:15:20.302036 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175905850.mount: Deactivated successfully.
Apr 24 00:15:20.995211 containerd[1582]: time="2026-04-24T00:15:20.995148468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:20.996120 containerd[1582]: time="2026-04-24T00:15:20.996093989Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942244"
Apr 24 00:15:20.996866 containerd[1582]: time="2026-04-24T00:15:20.996573639Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:20.999281 containerd[1582]: time="2026-04-24T00:15:20.998881041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 24 00:15:21.000123 containerd[1582]: time="2026-04-24T00:15:21.000093103Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.191573082s" Apr 24 00:15:21.000167 containerd[1582]: time="2026-04-24T00:15:21.000124073Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 24 00:15:21.000888 containerd[1582]: time="2026-04-24T00:15:21.000862173Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 24 00:15:21.522911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3877672796.mount: Deactivated successfully. Apr 24 00:15:21.526667 containerd[1582]: time="2026-04-24T00:15:21.526598159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:15:21.527202 containerd[1582]: time="2026-04-24T00:15:21.527181230Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321144" Apr 24 00:15:21.527677 containerd[1582]: time="2026-04-24T00:15:21.527621410Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:15:21.529178 containerd[1582]: time="2026-04-24T00:15:21.529143232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 24 00:15:21.530332 containerd[1582]: time="2026-04-24T00:15:21.529824002Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 528.934789ms" Apr 24 00:15:21.530332 containerd[1582]: time="2026-04-24T00:15:21.529852842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 24 00:15:21.530733 containerd[1582]: time="2026-04-24T00:15:21.530691853Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 24 00:15:22.053003 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569967888.mount: Deactivated successfully. Apr 24 00:15:22.801276 containerd[1582]: time="2026-04-24T00:15:22.801230853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:22.802389 containerd[1582]: time="2026-04-24T00:15:22.802364764Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719432" Apr 24 00:15:22.802782 containerd[1582]: time="2026-04-24T00:15:22.802757935Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:22.805154 containerd[1582]: time="2026-04-24T00:15:22.805128917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:22.806126 containerd[1582]: time="2026-04-24T00:15:22.806101108Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size 
\"23716032\" in 1.275371205s" Apr 24 00:15:22.806172 containerd[1582]: time="2026-04-24T00:15:22.806129398Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 24 00:15:25.781198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 24 00:15:25.783478 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:15:25.795344 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 00:15:25.795533 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 00:15:25.796058 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:15:25.799359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:15:25.829073 systemd[1]: Reload requested from client PID 2290 ('systemctl') (unit session-7.scope)... Apr 24 00:15:25.829086 systemd[1]: Reloading... Apr 24 00:15:25.946653 zram_generator::config[2335]: No configuration found. Apr 24 00:15:26.173220 systemd[1]: Reloading finished in 343 ms. Apr 24 00:15:26.225280 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 24 00:15:26.225382 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 24 00:15:26.225927 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:15:26.225971 systemd[1]: kubelet.service: Consumed 132ms CPU time, 98.3M memory peak. Apr 24 00:15:26.228167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:15:26.407040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 24 00:15:26.416154 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 00:15:26.456653 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 00:15:26.456653 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 00:15:26.456653 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 00:15:26.456653 kubelet[2389]: I0424 00:15:26.456388 2389 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 00:15:26.693726 kubelet[2389]: I0424 00:15:26.693698 2389 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 00:15:26.693850 kubelet[2389]: I0424 00:15:26.693840 2389 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 00:15:26.694068 kubelet[2389]: I0424 00:15:26.694057 2389 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 00:15:26.731791 kubelet[2389]: E0424 00:15:26.731455 2389 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.234.215.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.215.230:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 24 00:15:26.735410 kubelet[2389]: I0424 00:15:26.735003 2389 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 00:15:26.740674 kubelet[2389]: I0424 00:15:26.740653 2389 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 00:15:26.745998 kubelet[2389]: I0424 00:15:26.745783 2389 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 24 00:15:26.747018 kubelet[2389]: I0424 00:15:26.746975 2389 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 00:15:26.747159 kubelet[2389]: I0424 00:15:26.747007 2389 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-215-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","C
PUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 00:15:26.747159 kubelet[2389]: I0424 00:15:26.747155 2389 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 00:15:26.747299 kubelet[2389]: I0424 00:15:26.747163 2389 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 00:15:26.747299 kubelet[2389]: I0424 00:15:26.747285 2389 state_mem.go:36] "Initialized new in-memory state store" Apr 24 00:15:26.751785 kubelet[2389]: I0424 00:15:26.751766 2389 kubelet.go:480] "Attempting to sync node with API server" Apr 24 00:15:26.751785 kubelet[2389]: I0424 00:15:26.751784 2389 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 00:15:26.752658 kubelet[2389]: I0424 00:15:26.752406 2389 kubelet.go:386] "Adding apiserver pod source" Apr 24 00:15:26.754648 kubelet[2389]: I0424 00:15:26.754515 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 00:15:26.757663 kubelet[2389]: E0424 00:15:26.757614 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.234.215.230:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-215-230&limit=500&resourceVersion=0\": dial tcp 172.234.215.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 24 00:15:26.757981 kubelet[2389]: E0424 00:15:26.757953 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.234.215.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.215.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 24 
00:15:26.758279 kubelet[2389]: I0424 00:15:26.758256 2389 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 24 00:15:26.758734 kubelet[2389]: I0424 00:15:26.758712 2389 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 00:15:26.759420 kubelet[2389]: W0424 00:15:26.759387 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 24 00:15:26.770684 kubelet[2389]: I0424 00:15:26.769006 2389 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 00:15:26.770684 kubelet[2389]: I0424 00:15:26.769042 2389 server.go:1289] "Started kubelet" Apr 24 00:15:26.772065 kubelet[2389]: I0424 00:15:26.772041 2389 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 00:15:26.773206 kubelet[2389]: I0424 00:15:26.773192 2389 server.go:317] "Adding debug handlers to kubelet server" Apr 24 00:15:26.774045 kubelet[2389]: I0424 00:15:26.773991 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 00:15:26.774333 kubelet[2389]: I0424 00:15:26.774306 2389 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 00:15:26.775845 kubelet[2389]: I0424 00:15:26.775685 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 00:15:26.780401 kubelet[2389]: E0424 00:15:26.779126 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.215.230:6443/api/v1/namespaces/default/events\": dial tcp 172.234.215.230:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-215-230.18a922bdd995f968 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-215-230,UID:172-234-215-230,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-215-230,},FirstTimestamp:2026-04-24 00:15:26.76901924 +0000 UTC m=+0.347839549,LastTimestamp:2026-04-24 00:15:26.76901924 +0000 UTC m=+0.347839549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-215-230,}" Apr 24 00:15:26.782169 kubelet[2389]: I0424 00:15:26.780849 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 00:15:26.783704 kubelet[2389]: E0424 00:15:26.783676 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-234-215-230\" not found" Apr 24 00:15:26.783704 kubelet[2389]: I0424 00:15:26.783706 2389 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 00:15:26.784039 kubelet[2389]: I0424 00:15:26.784017 2389 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 00:15:26.784073 kubelet[2389]: I0424 00:15:26.784067 2389 reconciler.go:26] "Reconciler: start to sync state" Apr 24 00:15:26.784430 kubelet[2389]: E0424 00:15:26.784399 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.215.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.215.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 00:15:26.784652 kubelet[2389]: E0424 00:15:26.784598 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.215.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-215-230?timeout=10s\": dial tcp 
172.234.215.230:6443: connect: connection refused" interval="200ms" Apr 24 00:15:26.787095 kubelet[2389]: I0424 00:15:26.787064 2389 factory.go:223] Registration of the systemd container factory successfully Apr 24 00:15:26.787147 kubelet[2389]: I0424 00:15:26.787137 2389 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 00:15:26.789013 kubelet[2389]: E0424 00:15:26.788984 2389 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 00:15:26.789333 kubelet[2389]: I0424 00:15:26.789296 2389 factory.go:223] Registration of the containerd container factory successfully Apr 24 00:15:26.805819 kubelet[2389]: I0424 00:15:26.805707 2389 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 00:15:26.807751 kubelet[2389]: I0424 00:15:26.807616 2389 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 24 00:15:26.807751 kubelet[2389]: I0424 00:15:26.807723 2389 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 00:15:26.807751 kubelet[2389]: I0424 00:15:26.807745 2389 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 24 00:15:26.807751 kubelet[2389]: I0424 00:15:26.807754 2389 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 00:15:26.807909 kubelet[2389]: E0424 00:15:26.807794 2389 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 00:15:26.817234 kubelet[2389]: E0424 00:15:26.817198 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.234.215.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.215.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 24 00:15:26.823450 kubelet[2389]: I0424 00:15:26.823409 2389 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 00:15:26.823450 kubelet[2389]: I0424 00:15:26.823469 2389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 00:15:26.823450 kubelet[2389]: I0424 00:15:26.823496 2389 state_mem.go:36] "Initialized new in-memory state store" Apr 24 00:15:26.825211 kubelet[2389]: I0424 00:15:26.825174 2389 policy_none.go:49] "None policy: Start" Apr 24 00:15:26.825211 kubelet[2389]: I0424 00:15:26.825203 2389 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 00:15:26.825291 kubelet[2389]: I0424 00:15:26.825216 2389 state_mem.go:35] "Initializing new in-memory state store" Apr 24 00:15:26.831207 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 24 00:15:26.843024 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 24 00:15:26.846605 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Apr 24 00:15:26.857905 kubelet[2389]: E0424 00:15:26.857864 2389 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 00:15:26.858545 kubelet[2389]: I0424 00:15:26.858301 2389 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 00:15:26.858545 kubelet[2389]: I0424 00:15:26.858314 2389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 00:15:26.858545 kubelet[2389]: I0424 00:15:26.858503 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 00:15:26.860456 kubelet[2389]: E0424 00:15:26.860433 2389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 00:15:26.860519 kubelet[2389]: E0424 00:15:26.860471 2389 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-215-230\" not found" Apr 24 00:15:26.920227 systemd[1]: Created slice kubepods-burstable-pod51bfe025014a5f36b792b09fa708bbe0.slice - libcontainer container kubepods-burstable-pod51bfe025014a5f36b792b09fa708bbe0.slice. Apr 24 00:15:26.935455 kubelet[2389]: E0424 00:15:26.935271 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-215-230\" not found" node="172-234-215-230" Apr 24 00:15:26.939904 systemd[1]: Created slice kubepods-burstable-pod76bbe4c3f6213c165aadd478746771c1.slice - libcontainer container kubepods-burstable-pod76bbe4c3f6213c165aadd478746771c1.slice. 
Apr 24 00:15:26.942180 kubelet[2389]: E0424 00:15:26.942144 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-215-230\" not found" node="172-234-215-230" Apr 24 00:15:26.944741 systemd[1]: Created slice kubepods-burstable-podf5617896eea12e9921c17be62e53473f.slice - libcontainer container kubepods-burstable-podf5617896eea12e9921c17be62e53473f.slice. Apr 24 00:15:26.947996 kubelet[2389]: E0424 00:15:26.947979 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-215-230\" not found" node="172-234-215-230" Apr 24 00:15:26.960659 kubelet[2389]: I0424 00:15:26.960474 2389 kubelet_node_status.go:75] "Attempting to register node" node="172-234-215-230" Apr 24 00:15:26.960858 kubelet[2389]: E0424 00:15:26.960827 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.215.230:6443/api/v1/nodes\": dial tcp 172.234.215.230:6443: connect: connection refused" node="172-234-215-230" Apr 24 00:15:26.985316 kubelet[2389]: E0424 00:15:26.985232 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.215.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-215-230?timeout=10s\": dial tcp 172.234.215.230:6443: connect: connection refused" interval="400ms" Apr 24 00:15:27.085795 kubelet[2389]: I0424 00:15:27.085769 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5617896eea12e9921c17be62e53473f-ca-certs\") pod \"kube-controller-manager-172-234-215-230\" (UID: \"f5617896eea12e9921c17be62e53473f\") " pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:27.085878 kubelet[2389]: I0424 00:15:27.085804 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/f5617896eea12e9921c17be62e53473f-k8s-certs\") pod \"kube-controller-manager-172-234-215-230\" (UID: \"f5617896eea12e9921c17be62e53473f\") " pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:27.085878 kubelet[2389]: I0424 00:15:27.085826 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5617896eea12e9921c17be62e53473f-kubeconfig\") pod \"kube-controller-manager-172-234-215-230\" (UID: \"f5617896eea12e9921c17be62e53473f\") " pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:27.085878 kubelet[2389]: I0424 00:15:27.085844 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5617896eea12e9921c17be62e53473f-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-215-230\" (UID: \"f5617896eea12e9921c17be62e53473f\") " pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:27.085878 kubelet[2389]: I0424 00:15:27.085863 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51bfe025014a5f36b792b09fa708bbe0-kubeconfig\") pod \"kube-scheduler-172-234-215-230\" (UID: \"51bfe025014a5f36b792b09fa708bbe0\") " pod="kube-system/kube-scheduler-172-234-215-230" Apr 24 00:15:27.086025 kubelet[2389]: I0424 00:15:27.085887 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76bbe4c3f6213c165aadd478746771c1-ca-certs\") pod \"kube-apiserver-172-234-215-230\" (UID: \"76bbe4c3f6213c165aadd478746771c1\") " pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:27.086025 kubelet[2389]: I0424 00:15:27.085905 2389 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5617896eea12e9921c17be62e53473f-flexvolume-dir\") pod \"kube-controller-manager-172-234-215-230\" (UID: \"f5617896eea12e9921c17be62e53473f\") " pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:27.086025 kubelet[2389]: I0424 00:15:27.085921 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76bbe4c3f6213c165aadd478746771c1-k8s-certs\") pod \"kube-apiserver-172-234-215-230\" (UID: \"76bbe4c3f6213c165aadd478746771c1\") " pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:27.086025 kubelet[2389]: I0424 00:15:27.085937 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76bbe4c3f6213c165aadd478746771c1-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-215-230\" (UID: \"76bbe4c3f6213c165aadd478746771c1\") " pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:27.163641 kubelet[2389]: I0424 00:15:27.163582 2389 kubelet_node_status.go:75] "Attempting to register node" node="172-234-215-230" Apr 24 00:15:27.163928 kubelet[2389]: E0424 00:15:27.163875 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.215.230:6443/api/v1/nodes\": dial tcp 172.234.215.230:6443: connect: connection refused" node="172-234-215-230" Apr 24 00:15:27.236070 kubelet[2389]: E0424 00:15:27.235974 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:27.236702 containerd[1582]: time="2026-04-24T00:15:27.236650558Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-172-234-215-230,Uid:51bfe025014a5f36b792b09fa708bbe0,Namespace:kube-system,Attempt:0,}" Apr 24 00:15:27.243462 kubelet[2389]: E0424 00:15:27.243407 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:27.244099 containerd[1582]: time="2026-04-24T00:15:27.244076985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-215-230,Uid:76bbe4c3f6213c165aadd478746771c1,Namespace:kube-system,Attempt:0,}" Apr 24 00:15:27.253796 kubelet[2389]: E0424 00:15:27.253772 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:27.254177 containerd[1582]: time="2026-04-24T00:15:27.254157825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-215-230,Uid:f5617896eea12e9921c17be62e53473f,Namespace:kube-system,Attempt:0,}" Apr 24 00:15:27.268964 containerd[1582]: time="2026-04-24T00:15:27.268795080Z" level=info msg="connecting to shim 0f5dc81e2ab2c54a9acfd6141b774bf2be89b5d2904584ab539811be9498b3c7" address="unix:///run/containerd/s/541db13b530ad1ec8906580dbc70dd4403270c1bb4f1f2a7b90272ea61dbdf48" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:15:27.275991 containerd[1582]: time="2026-04-24T00:15:27.275759707Z" level=info msg="connecting to shim 64cfe82c287c0cf988efaed650cdbe97a34a4bbe1e9c7269676d5487d1c19278" address="unix:///run/containerd/s/13c1daca141fca47fba46d0e18aa1d25accfa6e47ead4ebda9f3ef790e553aca" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:15:27.286859 containerd[1582]: time="2026-04-24T00:15:27.286730188Z" level=info msg="connecting to shim 2f250f88f03cd234dbb9a0faaabbd58103ac759a9741e56e30beb90ad3192751" 
address="unix:///run/containerd/s/dc8eedcee3ed0a26db4c55abe2afad48033db68d87f8d21a346c179007ae0226" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:15:27.311126 systemd[1]: Started cri-containerd-0f5dc81e2ab2c54a9acfd6141b774bf2be89b5d2904584ab539811be9498b3c7.scope - libcontainer container 0f5dc81e2ab2c54a9acfd6141b774bf2be89b5d2904584ab539811be9498b3c7. Apr 24 00:15:27.327756 systemd[1]: Started cri-containerd-64cfe82c287c0cf988efaed650cdbe97a34a4bbe1e9c7269676d5487d1c19278.scope - libcontainer container 64cfe82c287c0cf988efaed650cdbe97a34a4bbe1e9c7269676d5487d1c19278. Apr 24 00:15:27.332932 systemd[1]: Started cri-containerd-2f250f88f03cd234dbb9a0faaabbd58103ac759a9741e56e30beb90ad3192751.scope - libcontainer container 2f250f88f03cd234dbb9a0faaabbd58103ac759a9741e56e30beb90ad3192751. Apr 24 00:15:27.386515 kubelet[2389]: E0424 00:15:27.386382 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.215.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-215-230?timeout=10s\": dial tcp 172.234.215.230:6443: connect: connection refused" interval="800ms" Apr 24 00:15:27.395134 containerd[1582]: time="2026-04-24T00:15:27.395046756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-215-230,Uid:51bfe025014a5f36b792b09fa708bbe0,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f5dc81e2ab2c54a9acfd6141b774bf2be89b5d2904584ab539811be9498b3c7\"" Apr 24 00:15:27.397009 kubelet[2389]: E0424 00:15:27.396990 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:27.404648 containerd[1582]: time="2026-04-24T00:15:27.403540875Z" level=info msg="CreateContainer within sandbox \"0f5dc81e2ab2c54a9acfd6141b774bf2be89b5d2904584ab539811be9498b3c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 24 
00:15:27.407667 containerd[1582]: time="2026-04-24T00:15:27.407624529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-215-230,Uid:f5617896eea12e9921c17be62e53473f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f250f88f03cd234dbb9a0faaabbd58103ac759a9741e56e30beb90ad3192751\"" Apr 24 00:15:27.408551 kubelet[2389]: E0424 00:15:27.408497 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:27.410370 containerd[1582]: time="2026-04-24T00:15:27.410339991Z" level=info msg="Container 303046d735f9eccdb1e3014e1cdbf2040aadb64003e90297c8f40b7698bfa638: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:15:27.412137 containerd[1582]: time="2026-04-24T00:15:27.412078813Z" level=info msg="CreateContainer within sandbox \"2f250f88f03cd234dbb9a0faaabbd58103ac759a9741e56e30beb90ad3192751\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 24 00:15:27.420974 containerd[1582]: time="2026-04-24T00:15:27.419575871Z" level=info msg="CreateContainer within sandbox \"0f5dc81e2ab2c54a9acfd6141b774bf2be89b5d2904584ab539811be9498b3c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"303046d735f9eccdb1e3014e1cdbf2040aadb64003e90297c8f40b7698bfa638\"" Apr 24 00:15:27.420974 containerd[1582]: time="2026-04-24T00:15:27.420361221Z" level=info msg="StartContainer for \"303046d735f9eccdb1e3014e1cdbf2040aadb64003e90297c8f40b7698bfa638\"" Apr 24 00:15:27.420974 containerd[1582]: time="2026-04-24T00:15:27.420544072Z" level=info msg="Container 1e7ef754d0634e1c0b03d782665ac49306e57dc01512889e17d2e7311eae5a0f: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:15:27.423292 containerd[1582]: time="2026-04-24T00:15:27.423270324Z" level=info msg="connecting to shim 303046d735f9eccdb1e3014e1cdbf2040aadb64003e90297c8f40b7698bfa638" 
address="unix:///run/containerd/s/541db13b530ad1ec8906580dbc70dd4403270c1bb4f1f2a7b90272ea61dbdf48" protocol=ttrpc version=3 Apr 24 00:15:27.428904 containerd[1582]: time="2026-04-24T00:15:27.428697920Z" level=info msg="CreateContainer within sandbox \"2f250f88f03cd234dbb9a0faaabbd58103ac759a9741e56e30beb90ad3192751\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1e7ef754d0634e1c0b03d782665ac49306e57dc01512889e17d2e7311eae5a0f\"" Apr 24 00:15:27.428980 containerd[1582]: time="2026-04-24T00:15:27.428941880Z" level=info msg="StartContainer for \"1e7ef754d0634e1c0b03d782665ac49306e57dc01512889e17d2e7311eae5a0f\"" Apr 24 00:15:27.429807 containerd[1582]: time="2026-04-24T00:15:27.429782091Z" level=info msg="connecting to shim 1e7ef754d0634e1c0b03d782665ac49306e57dc01512889e17d2e7311eae5a0f" address="unix:///run/containerd/s/dc8eedcee3ed0a26db4c55abe2afad48033db68d87f8d21a346c179007ae0226" protocol=ttrpc version=3 Apr 24 00:15:27.434944 containerd[1582]: time="2026-04-24T00:15:27.434883176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-215-230,Uid:76bbe4c3f6213c165aadd478746771c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"64cfe82c287c0cf988efaed650cdbe97a34a4bbe1e9c7269676d5487d1c19278\"" Apr 24 00:15:27.435571 kubelet[2389]: E0424 00:15:27.435518 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:27.439578 containerd[1582]: time="2026-04-24T00:15:27.438936000Z" level=info msg="CreateContainer within sandbox \"64cfe82c287c0cf988efaed650cdbe97a34a4bbe1e9c7269676d5487d1c19278\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 24 00:15:27.451340 containerd[1582]: time="2026-04-24T00:15:27.451319072Z" level=info msg="Container 11770ba9701ae6d113bfdf458f93338e9dbb5e9577f6f95ee1e60e863dd0658a: CDI devices from CRI 
Config.CDIDevices: []" Apr 24 00:15:27.452799 systemd[1]: Started cri-containerd-303046d735f9eccdb1e3014e1cdbf2040aadb64003e90297c8f40b7698bfa638.scope - libcontainer container 303046d735f9eccdb1e3014e1cdbf2040aadb64003e90297c8f40b7698bfa638. Apr 24 00:15:27.457465 containerd[1582]: time="2026-04-24T00:15:27.457435279Z" level=info msg="CreateContainer within sandbox \"64cfe82c287c0cf988efaed650cdbe97a34a4bbe1e9c7269676d5487d1c19278\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"11770ba9701ae6d113bfdf458f93338e9dbb5e9577f6f95ee1e60e863dd0658a\"" Apr 24 00:15:27.459709 containerd[1582]: time="2026-04-24T00:15:27.458827810Z" level=info msg="StartContainer for \"11770ba9701ae6d113bfdf458f93338e9dbb5e9577f6f95ee1e60e863dd0658a\"" Apr 24 00:15:27.460059 containerd[1582]: time="2026-04-24T00:15:27.459969321Z" level=info msg="connecting to shim 11770ba9701ae6d113bfdf458f93338e9dbb5e9577f6f95ee1e60e863dd0658a" address="unix:///run/containerd/s/13c1daca141fca47fba46d0e18aa1d25accfa6e47ead4ebda9f3ef790e553aca" protocol=ttrpc version=3 Apr 24 00:15:27.462755 systemd[1]: Started cri-containerd-1e7ef754d0634e1c0b03d782665ac49306e57dc01512889e17d2e7311eae5a0f.scope - libcontainer container 1e7ef754d0634e1c0b03d782665ac49306e57dc01512889e17d2e7311eae5a0f. Apr 24 00:15:27.486871 systemd[1]: Started cri-containerd-11770ba9701ae6d113bfdf458f93338e9dbb5e9577f6f95ee1e60e863dd0658a.scope - libcontainer container 11770ba9701ae6d113bfdf458f93338e9dbb5e9577f6f95ee1e60e863dd0658a. 
Apr 24 00:15:27.550930 containerd[1582]: time="2026-04-24T00:15:27.550841852Z" level=info msg="StartContainer for \"303046d735f9eccdb1e3014e1cdbf2040aadb64003e90297c8f40b7698bfa638\" returns successfully" Apr 24 00:15:27.563860 containerd[1582]: time="2026-04-24T00:15:27.563767615Z" level=info msg="StartContainer for \"1e7ef754d0634e1c0b03d782665ac49306e57dc01512889e17d2e7311eae5a0f\" returns successfully" Apr 24 00:15:27.567682 kubelet[2389]: I0424 00:15:27.567053 2389 kubelet_node_status.go:75] "Attempting to register node" node="172-234-215-230" Apr 24 00:15:27.567682 kubelet[2389]: E0424 00:15:27.567300 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.234.215.230:6443/api/v1/nodes\": dial tcp 172.234.215.230:6443: connect: connection refused" node="172-234-215-230" Apr 24 00:15:27.583765 containerd[1582]: time="2026-04-24T00:15:27.583728565Z" level=info msg="StartContainer for \"11770ba9701ae6d113bfdf458f93338e9dbb5e9577f6f95ee1e60e863dd0658a\" returns successfully" Apr 24 00:15:27.593506 kubelet[2389]: E0424 00:15:27.593474 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.234.215.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.215.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 24 00:15:27.831116 kubelet[2389]: E0424 00:15:27.831001 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-215-230\" not found" node="172-234-215-230" Apr 24 00:15:27.831194 kubelet[2389]: E0424 00:15:27.831117 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:27.834082 kubelet[2389]: E0424 00:15:27.834062 2389 kubelet.go:3305] "No need 
to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-215-230\" not found" node="172-234-215-230" Apr 24 00:15:27.834162 kubelet[2389]: E0424 00:15:27.834144 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:27.840018 kubelet[2389]: E0424 00:15:27.839990 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-234-215-230\" not found" node="172-234-215-230" Apr 24 00:15:27.840114 kubelet[2389]: E0424 00:15:27.840105 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:28.371418 kubelet[2389]: I0424 00:15:28.371383 2389 kubelet_node_status.go:75] "Attempting to register node" node="172-234-215-230" Apr 24 00:15:28.576762 kubelet[2389]: E0424 00:15:28.576728 2389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-215-230\" not found" node="172-234-215-230" Apr 24 00:15:28.634699 kubelet[2389]: I0424 00:15:28.634559 2389 kubelet_node_status.go:78] "Successfully registered node" node="172-234-215-230" Apr 24 00:15:28.634699 kubelet[2389]: E0424 00:15:28.634590 2389 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-234-215-230\": node \"172-234-215-230\" not found" Apr 24 00:15:28.684726 kubelet[2389]: I0424 00:15:28.684554 2389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-215-230" Apr 24 00:15:28.689474 kubelet[2389]: E0424 00:15:28.689452 2389 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-215-230\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-172-234-215-230" Apr 24 00:15:28.689474 kubelet[2389]: I0424 00:15:28.689473 2389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:28.690540 kubelet[2389]: E0424 00:15:28.690524 2389 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-215-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:28.690540 kubelet[2389]: I0424 00:15:28.690538 2389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:28.691694 kubelet[2389]: E0424 00:15:28.691672 2389 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-234-215-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:28.758559 kubelet[2389]: I0424 00:15:28.758533 2389 apiserver.go:52] "Watching apiserver" Apr 24 00:15:28.784712 kubelet[2389]: I0424 00:15:28.784688 2389 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 00:15:28.837003 kubelet[2389]: I0424 00:15:28.836851 2389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:28.837003 kubelet[2389]: I0424 00:15:28.836940 2389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-215-230" Apr 24 00:15:28.838409 kubelet[2389]: E0424 00:15:28.838380 2389 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-215-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-234-215-230" Apr 24 00:15:28.838504 kubelet[2389]: E0424 00:15:28.838490 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:28.838568 kubelet[2389]: E0424 00:15:28.838551 2389 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-215-230\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:28.838654 kubelet[2389]: E0424 00:15:28.838622 2389 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:30.376291 systemd[1]: Reload requested from client PID 2661 ('systemctl') (unit session-7.scope)... Apr 24 00:15:30.376309 systemd[1]: Reloading... Apr 24 00:15:30.479680 zram_generator::config[2705]: No configuration found. Apr 24 00:15:30.706577 systemd[1]: Reloading finished in 329 ms. Apr 24 00:15:30.734772 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:15:30.745688 systemd[1]: kubelet.service: Deactivated successfully. Apr 24 00:15:30.746187 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:15:30.746225 systemd[1]: kubelet.service: Consumed 717ms CPU time, 133M memory peak. Apr 24 00:15:30.748923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 24 00:15:30.940239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 24 00:15:30.946012 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 24 00:15:30.993483 kubelet[2756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 24 00:15:30.993483 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 24 00:15:30.993483 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 24 00:15:30.994573 kubelet[2756]: I0424 00:15:30.994057 2756 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 24 00:15:31.001698 kubelet[2756]: I0424 00:15:31.001668 2756 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 24 00:15:31.001698 kubelet[2756]: I0424 00:15:31.001692 2756 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 24 00:15:31.002458 kubelet[2756]: I0424 00:15:31.002426 2756 server.go:956] "Client rotation is on, will bootstrap in background" Apr 24 00:15:31.005505 kubelet[2756]: I0424 00:15:31.004680 2756 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 24 00:15:31.007458 kubelet[2756]: I0424 00:15:31.007421 2756 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 24 00:15:31.011394 kubelet[2756]: I0424 00:15:31.011381 2756 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 24 00:15:31.015543 kubelet[2756]: I0424 00:15:31.015528 2756 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 24 00:15:31.015872 kubelet[2756]: I0424 00:15:31.015844 2756 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 24 00:15:31.016230 kubelet[2756]: I0424 00:15:31.015924 2756 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-215-230","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 24 00:15:31.016336 kubelet[2756]: I0424 00:15:31.016325 2756 topology_manager.go:138] "Creating topology manager with none policy" Apr 24 
00:15:31.016386 kubelet[2756]: I0424 00:15:31.016379 2756 container_manager_linux.go:303] "Creating device plugin manager" Apr 24 00:15:31.016493 kubelet[2756]: I0424 00:15:31.016484 2756 state_mem.go:36] "Initialized new in-memory state store" Apr 24 00:15:31.016744 kubelet[2756]: I0424 00:15:31.016732 2756 kubelet.go:480] "Attempting to sync node with API server" Apr 24 00:15:31.016980 kubelet[2756]: I0424 00:15:31.016971 2756 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 24 00:15:31.017037 kubelet[2756]: I0424 00:15:31.017029 2756 kubelet.go:386] "Adding apiserver pod source" Apr 24 00:15:31.017088 kubelet[2756]: I0424 00:15:31.017080 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 24 00:15:31.023080 kubelet[2756]: I0424 00:15:31.023065 2756 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 24 00:15:31.023912 kubelet[2756]: I0424 00:15:31.023897 2756 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 24 00:15:31.027792 kubelet[2756]: I0424 00:15:31.027767 2756 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 24 00:15:31.027908 kubelet[2756]: I0424 00:15:31.027897 2756 server.go:1289] "Started kubelet" Apr 24 00:15:31.033383 kubelet[2756]: I0424 00:15:31.033321 2756 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 24 00:15:31.034324 kubelet[2756]: I0424 00:15:31.034312 2756 server.go:317] "Adding debug handlers to kubelet server" Apr 24 00:15:31.036945 kubelet[2756]: I0424 00:15:31.036927 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 24 00:15:31.039078 kubelet[2756]: I0424 00:15:31.039029 2756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 24 00:15:31.040212 kubelet[2756]: I0424 00:15:31.040095 2756 
server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 24 00:15:31.042233 kubelet[2756]: I0424 00:15:31.042219 2756 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 24 00:15:31.046521 kubelet[2756]: I0424 00:15:31.045819 2756 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 24 00:15:31.046521 kubelet[2756]: I0424 00:15:31.045903 2756 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 24 00:15:31.046521 kubelet[2756]: I0424 00:15:31.045998 2756 reconciler.go:26] "Reconciler: start to sync state" Apr 24 00:15:31.047226 kubelet[2756]: E0424 00:15:31.047210 2756 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 24 00:15:31.049641 kubelet[2756]: I0424 00:15:31.049602 2756 factory.go:223] Registration of the containerd container factory successfully Apr 24 00:15:31.049759 kubelet[2756]: I0424 00:15:31.049749 2756 factory.go:223] Registration of the systemd container factory successfully Apr 24 00:15:31.050273 kubelet[2756]: I0424 00:15:31.050255 2756 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 24 00:15:31.063780 kubelet[2756]: I0424 00:15:31.063759 2756 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 24 00:15:31.066815 kubelet[2756]: I0424 00:15:31.066552 2756 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 24 00:15:31.066864 kubelet[2756]: I0424 00:15:31.066830 2756 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 24 00:15:31.066864 kubelet[2756]: I0424 00:15:31.066847 2756 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 24 00:15:31.066864 kubelet[2756]: I0424 00:15:31.066853 2756 kubelet.go:2436] "Starting kubelet main sync loop" Apr 24 00:15:31.067199 kubelet[2756]: E0424 00:15:31.067140 2756 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 24 00:15:31.099483 kubelet[2756]: I0424 00:15:31.099447 2756 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 24 00:15:31.099483 kubelet[2756]: I0424 00:15:31.099461 2756 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 24 00:15:31.099483 kubelet[2756]: I0424 00:15:31.099478 2756 state_mem.go:36] "Initialized new in-memory state store" Apr 24 00:15:31.099601 kubelet[2756]: I0424 00:15:31.099581 2756 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 24 00:15:31.099601 kubelet[2756]: I0424 00:15:31.099591 2756 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 24 00:15:31.099694 kubelet[2756]: I0424 00:15:31.099607 2756 policy_none.go:49] "None policy: Start" Apr 24 00:15:31.099694 kubelet[2756]: I0424 00:15:31.099617 2756 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 24 00:15:31.099694 kubelet[2756]: I0424 00:15:31.099627 2756 state_mem.go:35] "Initializing new in-memory state store" Apr 24 00:15:31.099769 kubelet[2756]: I0424 00:15:31.099727 2756 state_mem.go:75] "Updated machine memory state" Apr 24 00:15:31.104581 kubelet[2756]: E0424 00:15:31.104449 2756 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 24 00:15:31.104660 kubelet[2756]: I0424 
00:15:31.104585 2756 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 24 00:15:31.104660 kubelet[2756]: I0424 00:15:31.104603 2756 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 24 00:15:31.106403 kubelet[2756]: I0424 00:15:31.106348 2756 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 24 00:15:31.107529 kubelet[2756]: E0424 00:15:31.107485 2756 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 24 00:15:31.168661 kubelet[2756]: I0424 00:15:31.168515 2756 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:31.168661 kubelet[2756]: I0424 00:15:31.168603 2756 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-215-230" Apr 24 00:15:31.169158 kubelet[2756]: I0424 00:15:31.168519 2756 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:31.210843 kubelet[2756]: I0424 00:15:31.210808 2756 kubelet_node_status.go:75] "Attempting to register node" node="172-234-215-230" Apr 24 00:15:31.220011 kubelet[2756]: I0424 00:15:31.219945 2756 kubelet_node_status.go:124] "Node was previously registered" node="172-234-215-230" Apr 24 00:15:31.220357 kubelet[2756]: I0424 00:15:31.220312 2756 kubelet_node_status.go:78] "Successfully registered node" node="172-234-215-230" Apr 24 00:15:31.248661 kubelet[2756]: I0424 00:15:31.247431 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76bbe4c3f6213c165aadd478746771c1-ca-certs\") pod \"kube-apiserver-172-234-215-230\" (UID: \"76bbe4c3f6213c165aadd478746771c1\") " pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:31.348679 kubelet[2756]: I0424 
00:15:31.348390 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76bbe4c3f6213c165aadd478746771c1-k8s-certs\") pod \"kube-apiserver-172-234-215-230\" (UID: \"76bbe4c3f6213c165aadd478746771c1\") " pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:31.348679 kubelet[2756]: I0424 00:15:31.348478 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76bbe4c3f6213c165aadd478746771c1-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-215-230\" (UID: \"76bbe4c3f6213c165aadd478746771c1\") " pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:31.348679 kubelet[2756]: I0424 00:15:31.348515 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5617896eea12e9921c17be62e53473f-flexvolume-dir\") pod \"kube-controller-manager-172-234-215-230\" (UID: \"f5617896eea12e9921c17be62e53473f\") " pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:31.348679 kubelet[2756]: I0424 00:15:31.348552 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5617896eea12e9921c17be62e53473f-k8s-certs\") pod \"kube-controller-manager-172-234-215-230\" (UID: \"f5617896eea12e9921c17be62e53473f\") " pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:31.348679 kubelet[2756]: I0424 00:15:31.348572 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5617896eea12e9921c17be62e53473f-kubeconfig\") pod \"kube-controller-manager-172-234-215-230\" (UID: \"f5617896eea12e9921c17be62e53473f\") " 
pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:31.349097 kubelet[2756]: I0424 00:15:31.348617 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5617896eea12e9921c17be62e53473f-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-215-230\" (UID: \"f5617896eea12e9921c17be62e53473f\") " pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:31.349097 kubelet[2756]: I0424 00:15:31.348672 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5617896eea12e9921c17be62e53473f-ca-certs\") pod \"kube-controller-manager-172-234-215-230\" (UID: \"f5617896eea12e9921c17be62e53473f\") " pod="kube-system/kube-controller-manager-172-234-215-230" Apr 24 00:15:31.349097 kubelet[2756]: I0424 00:15:31.348697 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/51bfe025014a5f36b792b09fa708bbe0-kubeconfig\") pod \"kube-scheduler-172-234-215-230\" (UID: \"51bfe025014a5f36b792b09fa708bbe0\") " pod="kube-system/kube-scheduler-172-234-215-230" Apr 24 00:15:31.474675 kubelet[2756]: E0424 00:15:31.474263 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:31.474675 kubelet[2756]: E0424 00:15:31.474538 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:31.475521 kubelet[2756]: E0424 00:15:31.475432 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:32.020871 kubelet[2756]: I0424 00:15:32.020832 2756 apiserver.go:52] "Watching apiserver" Apr 24 00:15:32.046795 kubelet[2756]: I0424 00:15:32.046750 2756 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 24 00:15:32.084810 kubelet[2756]: I0424 00:15:32.084719 2756 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-234-215-230" Apr 24 00:15:32.084995 kubelet[2756]: I0424 00:15:32.084704 2756 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:32.085129 kubelet[2756]: E0424 00:15:32.085064 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:32.099214 kubelet[2756]: E0424 00:15:32.099177 2756 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-234-215-230\" already exists" pod="kube-system/kube-scheduler-172-234-215-230" Apr 24 00:15:32.099327 kubelet[2756]: E0424 00:15:32.099306 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:32.099790 kubelet[2756]: E0424 00:15:32.099604 2756 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-234-215-230\" already exists" pod="kube-system/kube-apiserver-172-234-215-230" Apr 24 00:15:32.099790 kubelet[2756]: E0424 00:15:32.099740 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:32.117883 kubelet[2756]: I0424 00:15:32.117672 2756 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-172-234-215-230" podStartSLOduration=1.117660398 podStartE2EDuration="1.117660398s" podCreationTimestamp="2026-04-24 00:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:15:32.108994369 +0000 UTC m=+1.156626497" watchObservedRunningTime="2026-04-24 00:15:32.117660398 +0000 UTC m=+1.165292526" Apr 24 00:15:32.124289 kubelet[2756]: I0424 00:15:32.124248 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-215-230" podStartSLOduration=1.124237594 podStartE2EDuration="1.124237594s" podCreationTimestamp="2026-04-24 00:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:15:32.118339718 +0000 UTC m=+1.165971846" watchObservedRunningTime="2026-04-24 00:15:32.124237594 +0000 UTC m=+1.171869722" Apr 24 00:15:32.134727 kubelet[2756]: I0424 00:15:32.134648 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-215-230" podStartSLOduration=1.134618455 podStartE2EDuration="1.134618455s" podCreationTimestamp="2026-04-24 00:15:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:15:32.125543336 +0000 UTC m=+1.173175464" watchObservedRunningTime="2026-04-24 00:15:32.134618455 +0000 UTC m=+1.182250583" Apr 24 00:15:33.086761 kubelet[2756]: E0424 00:15:33.086713 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:33.087369 kubelet[2756]: E0424 00:15:33.087344 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:33.088382 kubelet[2756]: E0424 00:15:33.088361 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:33.385960 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Apr 24 00:15:34.087883 kubelet[2756]: E0424 00:15:34.087846 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:37.321490 kubelet[2756]: I0424 00:15:37.321445 2756 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 24 00:15:37.322388 kubelet[2756]: I0424 00:15:37.321946 2756 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 24 00:15:37.322432 containerd[1582]: time="2026-04-24T00:15:37.321778737Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 24 00:15:38.061880 systemd[1]: Created slice kubepods-besteffort-pod3abd0452_eec1_454a_a4ec_a8d9bc9e7f1f.slice - libcontainer container kubepods-besteffort-pod3abd0452_eec1_454a_a4ec_a8d9bc9e7f1f.slice. 
Apr 24 00:15:38.087574 kubelet[2756]: I0424 00:15:38.087522 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f-kube-proxy\") pod \"kube-proxy-7k8d9\" (UID: \"3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f\") " pod="kube-system/kube-proxy-7k8d9" Apr 24 00:15:38.087718 kubelet[2756]: I0424 00:15:38.087584 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f-xtables-lock\") pod \"kube-proxy-7k8d9\" (UID: \"3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f\") " pod="kube-system/kube-proxy-7k8d9" Apr 24 00:15:38.087718 kubelet[2756]: I0424 00:15:38.087603 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f-lib-modules\") pod \"kube-proxy-7k8d9\" (UID: \"3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f\") " pod="kube-system/kube-proxy-7k8d9" Apr 24 00:15:38.087718 kubelet[2756]: I0424 00:15:38.087618 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8992\" (UniqueName: \"kubernetes.io/projected/3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f-kube-api-access-k8992\") pod \"kube-proxy-7k8d9\" (UID: \"3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f\") " pod="kube-system/kube-proxy-7k8d9" Apr 24 00:15:38.193721 kubelet[2756]: E0424 00:15:38.193678 2756 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 24 00:15:38.193721 kubelet[2756]: E0424 00:15:38.193704 2756 projected.go:194] Error preparing data for projected volume kube-api-access-k8992 for pod kube-system/kube-proxy-7k8d9: configmap "kube-root-ca.crt" not found Apr 24 00:15:38.193852 kubelet[2756]: E0424 00:15:38.193758 2756 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f-kube-api-access-k8992 podName:3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f nodeName:}" failed. No retries permitted until 2026-04-24 00:15:38.693740386 +0000 UTC m=+7.741372514 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k8992" (UniqueName: "kubernetes.io/projected/3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f-kube-api-access-k8992") pod "kube-proxy-7k8d9" (UID: "3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f") : configmap "kube-root-ca.crt" not found Apr 24 00:15:38.584170 systemd[1]: Created slice kubepods-besteffort-pod5255cd63_f5b1_49eb_a254_5fc94c9c7eab.slice - libcontainer container kubepods-besteffort-pod5255cd63_f5b1_49eb_a254_5fc94c9c7eab.slice. Apr 24 00:15:38.591305 kubelet[2756]: I0424 00:15:38.591252 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5255cd63-f5b1-49eb-a254-5fc94c9c7eab-var-lib-calico\") pod \"tigera-operator-8458958b4d-9mfrb\" (UID: \"5255cd63-f5b1-49eb-a254-5fc94c9c7eab\") " pod="tigera-operator/tigera-operator-8458958b4d-9mfrb" Apr 24 00:15:38.591305 kubelet[2756]: I0424 00:15:38.591294 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6drmq\" (UniqueName: \"kubernetes.io/projected/5255cd63-f5b1-49eb-a254-5fc94c9c7eab-kube-api-access-6drmq\") pod \"tigera-operator-8458958b4d-9mfrb\" (UID: \"5255cd63-f5b1-49eb-a254-5fc94c9c7eab\") " pod="tigera-operator/tigera-operator-8458958b4d-9mfrb" Apr 24 00:15:38.889500 containerd[1582]: time="2026-04-24T00:15:38.889052109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8458958b4d-9mfrb,Uid:5255cd63-f5b1-49eb-a254-5fc94c9c7eab,Namespace:tigera-operator,Attempt:0,}" Apr 24 00:15:38.916317 containerd[1582]: time="2026-04-24T00:15:38.916040018Z" level=info 
msg="connecting to shim 006db4305b5bd816c623f653367048c5ef76511171bc0321cf88130e9301e7dd" address="unix:///run/containerd/s/3ef45ba3e4ea91c5e20a2a00425ce1b12ec87f3e171f3c8cce40a02a73d41841" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:15:38.941781 systemd[1]: Started cri-containerd-006db4305b5bd816c623f653367048c5ef76511171bc0321cf88130e9301e7dd.scope - libcontainer container 006db4305b5bd816c623f653367048c5ef76511171bc0321cf88130e9301e7dd. Apr 24 00:15:38.971993 kubelet[2756]: E0424 00:15:38.971939 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:38.974291 containerd[1582]: time="2026-04-24T00:15:38.974167760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7k8d9,Uid:3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f,Namespace:kube-system,Attempt:0,}" Apr 24 00:15:39.007248 containerd[1582]: time="2026-04-24T00:15:39.007201852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-8458958b4d-9mfrb,Uid:5255cd63-f5b1-49eb-a254-5fc94c9c7eab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"006db4305b5bd816c623f653367048c5ef76511171bc0321cf88130e9301e7dd\"" Apr 24 00:15:39.010863 containerd[1582]: time="2026-04-24T00:15:39.010834762Z" level=info msg="connecting to shim d4e111f411a1063e5bfb87b0e3f6a0f7f564209b9126b0b0e1704902cbd65711" address="unix:///run/containerd/s/64ce11dfca7adabe3ebf64d95171d65d9e4bc28ff2da7cd079f54c2df24e0eed" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:15:39.012096 containerd[1582]: time="2026-04-24T00:15:39.011449755Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\"" Apr 24 00:15:39.037771 systemd[1]: Started cri-containerd-d4e111f411a1063e5bfb87b0e3f6a0f7f564209b9126b0b0e1704902cbd65711.scope - libcontainer container d4e111f411a1063e5bfb87b0e3f6a0f7f564209b9126b0b0e1704902cbd65711. 
Apr 24 00:15:39.065931 containerd[1582]: time="2026-04-24T00:15:39.065868468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7k8d9,Uid:3abd0452-eec1-454a-a4ec-a8d9bc9e7f1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4e111f411a1063e5bfb87b0e3f6a0f7f564209b9126b0b0e1704902cbd65711\"" Apr 24 00:15:39.066469 kubelet[2756]: E0424 00:15:39.066447 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:39.069803 containerd[1582]: time="2026-04-24T00:15:39.069772490Z" level=info msg="CreateContainer within sandbox \"d4e111f411a1063e5bfb87b0e3f6a0f7f564209b9126b0b0e1704902cbd65711\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 24 00:15:39.079315 containerd[1582]: time="2026-04-24T00:15:39.079275333Z" level=info msg="Container f4ca1a1ba8bc022fbce784e485c3008e1e4a9550ee88b68ccddc7dfe3469a484: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:15:39.084265 containerd[1582]: time="2026-04-24T00:15:39.084236181Z" level=info msg="CreateContainer within sandbox \"d4e111f411a1063e5bfb87b0e3f6a0f7f564209b9126b0b0e1704902cbd65711\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f4ca1a1ba8bc022fbce784e485c3008e1e4a9550ee88b68ccddc7dfe3469a484\"" Apr 24 00:15:39.084790 containerd[1582]: time="2026-04-24T00:15:39.084736403Z" level=info msg="StartContainer for \"f4ca1a1ba8bc022fbce784e485c3008e1e4a9550ee88b68ccddc7dfe3469a484\"" Apr 24 00:15:39.086127 containerd[1582]: time="2026-04-24T00:15:39.086106141Z" level=info msg="connecting to shim f4ca1a1ba8bc022fbce784e485c3008e1e4a9550ee88b68ccddc7dfe3469a484" address="unix:///run/containerd/s/64ce11dfca7adabe3ebf64d95171d65d9e4bc28ff2da7cd079f54c2df24e0eed" protocol=ttrpc version=3 Apr 24 00:15:39.108757 systemd[1]: Started cri-containerd-f4ca1a1ba8bc022fbce784e485c3008e1e4a9550ee88b68ccddc7dfe3469a484.scope - 
libcontainer container f4ca1a1ba8bc022fbce784e485c3008e1e4a9550ee88b68ccddc7dfe3469a484. Apr 24 00:15:39.171984 containerd[1582]: time="2026-04-24T00:15:39.171731667Z" level=info msg="StartContainer for \"f4ca1a1ba8bc022fbce784e485c3008e1e4a9550ee88b68ccddc7dfe3469a484\" returns successfully" Apr 24 00:15:40.039839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525198500.mount: Deactivated successfully. Apr 24 00:15:40.102447 kubelet[2756]: E0424 00:15:40.102410 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:40.114656 kubelet[2756]: I0424 00:15:40.114522 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7k8d9" podStartSLOduration=2.114508125 podStartE2EDuration="2.114508125s" podCreationTimestamp="2026-04-24 00:15:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:15:40.114145112 +0000 UTC m=+9.161777240" watchObservedRunningTime="2026-04-24 00:15:40.114508125 +0000 UTC m=+9.162140253" Apr 24 00:15:40.175522 kubelet[2756]: E0424 00:15:40.175321 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:41.104833 kubelet[2756]: E0424 00:15:41.104795 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:41.106224 kubelet[2756]: E0424 00:15:41.106208 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 
00:15:41.498763 containerd[1582]: time="2026-04-24T00:15:41.498726063Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:41.500124 containerd[1582]: time="2026-04-24T00:15:41.499999089Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=41007543" Apr 24 00:15:41.500654 containerd[1582]: time="2026-04-24T00:15:41.500609551Z" level=info msg="ImageCreate event name:\"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:41.502729 containerd[1582]: time="2026-04-24T00:15:41.502700403Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:41.503553 containerd[1582]: time="2026-04-24T00:15:41.503516416Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"41003538\" in 2.491693389s" Apr 24 00:15:41.503553 containerd[1582]: time="2026-04-24T00:15:41.503545916Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\"" Apr 24 00:15:41.507168 containerd[1582]: time="2026-04-24T00:15:41.507140434Z" level=info msg="CreateContainer within sandbox \"006db4305b5bd816c623f653367048c5ef76511171bc0321cf88130e9301e7dd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 24 00:15:41.511893 containerd[1582]: time="2026-04-24T00:15:41.511867348Z" level=info msg="Container 
8da5470cc0efac31242bbbe24a7c3059b5681616e24ac0be17dc027bac240a99: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:15:41.519413 containerd[1582]: time="2026-04-24T00:15:41.519388076Z" level=info msg="CreateContainer within sandbox \"006db4305b5bd816c623f653367048c5ef76511171bc0321cf88130e9301e7dd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8da5470cc0efac31242bbbe24a7c3059b5681616e24ac0be17dc027bac240a99\"" Apr 24 00:15:41.519810 containerd[1582]: time="2026-04-24T00:15:41.519788388Z" level=info msg="StartContainer for \"8da5470cc0efac31242bbbe24a7c3059b5681616e24ac0be17dc027bac240a99\"" Apr 24 00:15:41.521467 containerd[1582]: time="2026-04-24T00:15:41.521440486Z" level=info msg="connecting to shim 8da5470cc0efac31242bbbe24a7c3059b5681616e24ac0be17dc027bac240a99" address="unix:///run/containerd/s/3ef45ba3e4ea91c5e20a2a00425ce1b12ec87f3e171f3c8cce40a02a73d41841" protocol=ttrpc version=3 Apr 24 00:15:41.548764 systemd[1]: Started cri-containerd-8da5470cc0efac31242bbbe24a7c3059b5681616e24ac0be17dc027bac240a99.scope - libcontainer container 8da5470cc0efac31242bbbe24a7c3059b5681616e24ac0be17dc027bac240a99. 
Apr 24 00:15:41.585937 containerd[1582]: time="2026-04-24T00:15:41.585878989Z" level=info msg="StartContainer for \"8da5470cc0efac31242bbbe24a7c3059b5681616e24ac0be17dc027bac240a99\" returns successfully" Apr 24 00:15:41.704383 kubelet[2756]: E0424 00:15:41.704322 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:42.108676 kubelet[2756]: E0424 00:15:42.108087 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:42.108676 kubelet[2756]: E0424 00:15:42.108420 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:42.120814 kubelet[2756]: I0424 00:15:42.120731 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-8458958b4d-9mfrb" podStartSLOduration=1.625137621 podStartE2EDuration="4.120707131s" podCreationTimestamp="2026-04-24 00:15:38 +0000 UTC" firstStartedPulling="2026-04-24 00:15:39.00883067 +0000 UTC m=+8.056462808" lastFinishedPulling="2026-04-24 00:15:41.50440019 +0000 UTC m=+10.552032318" observedRunningTime="2026-04-24 00:15:42.12044489 +0000 UTC m=+11.168077028" watchObservedRunningTime="2026-04-24 00:15:42.120707131 +0000 UTC m=+11.168339259" Apr 24 00:15:42.228696 kubelet[2756]: E0424 00:15:42.228653 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:47.134969 sudo[1822]: pam_unix(sudo:session): session closed for user root Apr 24 00:15:47.238840 sshd[1821]: Connection closed by 20.229.252.112 port 
46452 Apr 24 00:15:47.239251 sshd-session[1818]: pam_unix(sshd:session): session closed for user core Apr 24 00:15:47.248136 systemd-logind[1555]: Session 7 logged out. Waiting for processes to exit. Apr 24 00:15:47.249218 systemd[1]: sshd@6-172.234.215.230:22-20.229.252.112:46452.service: Deactivated successfully. Apr 24 00:15:47.257265 systemd[1]: session-7.scope: Deactivated successfully. Apr 24 00:15:47.257615 systemd[1]: session-7.scope: Consumed 5.065s CPU time, 227.1M memory peak. Apr 24 00:15:47.264126 systemd-logind[1555]: Removed session 7. Apr 24 00:15:47.338733 update_engine[1559]: I20260424 00:15:47.338661 1559 update_attempter.cc:509] Updating boot flags... Apr 24 00:15:49.789112 systemd[1]: Created slice kubepods-besteffort-pod9b6fc0d5_5473_4960_a525_4561d1fd0763.slice - libcontainer container kubepods-besteffort-pod9b6fc0d5_5473_4960_a525_4561d1fd0763.slice. Apr 24 00:15:49.873212 kubelet[2756]: I0424 00:15:49.873174 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b6fc0d5-5473-4960-a525-4561d1fd0763-tigera-ca-bundle\") pod \"calico-typha-ff89c859-m252j\" (UID: \"9b6fc0d5-5473-4960-a525-4561d1fd0763\") " pod="calico-system/calico-typha-ff89c859-m252j" Apr 24 00:15:49.873212 kubelet[2756]: I0424 00:15:49.873214 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr5z4\" (UniqueName: \"kubernetes.io/projected/9b6fc0d5-5473-4960-a525-4561d1fd0763-kube-api-access-cr5z4\") pod \"calico-typha-ff89c859-m252j\" (UID: \"9b6fc0d5-5473-4960-a525-4561d1fd0763\") " pod="calico-system/calico-typha-ff89c859-m252j" Apr 24 00:15:49.873212 kubelet[2756]: I0424 00:15:49.873233 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9b6fc0d5-5473-4960-a525-4561d1fd0763-typha-certs\") pod 
\"calico-typha-ff89c859-m252j\" (UID: \"9b6fc0d5-5473-4960-a525-4561d1fd0763\") " pod="calico-system/calico-typha-ff89c859-m252j" Apr 24 00:15:49.885710 systemd[1]: Created slice kubepods-besteffort-pod02206842_7e75_4023_a6c2_6a5558c9da17.slice - libcontainer container kubepods-besteffort-pod02206842_7e75_4023_a6c2_6a5558c9da17.slice. Apr 24 00:15:49.974536 kubelet[2756]: I0424 00:15:49.974090 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-bpffs\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.975261 kubelet[2756]: I0424 00:15:49.975101 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-cni-log-dir\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.975500 kubelet[2756]: I0424 00:15:49.975345 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-flexvol-driver-host\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.976911 kubelet[2756]: I0424 00:15:49.976294 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02206842-7e75-4023-a6c2-6a5558c9da17-tigera-ca-bundle\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.976911 kubelet[2756]: I0424 00:15:49.976325 2756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-xtables-lock\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.976911 kubelet[2756]: I0424 00:15:49.976355 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-cni-bin-dir\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.976911 kubelet[2756]: I0424 00:15:49.976372 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-lib-modules\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.976911 kubelet[2756]: I0424 00:15:49.976389 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/02206842-7e75-4023-a6c2-6a5558c9da17-node-certs\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.977065 kubelet[2756]: I0424 00:15:49.976408 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rpf8\" (UniqueName: \"kubernetes.io/projected/02206842-7e75-4023-a6c2-6a5558c9da17-kube-api-access-7rpf8\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.977065 kubelet[2756]: I0424 00:15:49.976439 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"nodeproc\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-nodeproc\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.977065 kubelet[2756]: I0424 00:15:49.976458 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-cni-net-dir\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.977065 kubelet[2756]: I0424 00:15:49.976474 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-var-run-calico\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.977065 kubelet[2756]: I0424 00:15:49.976505 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-policysync\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.977185 kubelet[2756]: I0424 00:15:49.976519 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-sys-fs\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:49.977185 kubelet[2756]: I0424 00:15:49.976537 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/02206842-7e75-4023-a6c2-6a5558c9da17-var-lib-calico\") pod \"calico-node-68pf6\" (UID: \"02206842-7e75-4023-a6c2-6a5558c9da17\") " pod="calico-system/calico-node-68pf6" Apr 24 00:15:50.003894 kubelet[2756]: E0424 00:15:50.003852 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6mmr" podUID="952027fd-84bb-4249-83a5-04c7975a90e5" Apr 24 00:15:50.077725 kubelet[2756]: I0424 00:15:50.077005 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/952027fd-84bb-4249-83a5-04c7975a90e5-registration-dir\") pod \"csi-node-driver-k6mmr\" (UID: \"952027fd-84bb-4249-83a5-04c7975a90e5\") " pod="calico-system/csi-node-driver-k6mmr" Apr 24 00:15:50.077725 kubelet[2756]: I0424 00:15:50.077042 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/952027fd-84bb-4249-83a5-04c7975a90e5-socket-dir\") pod \"csi-node-driver-k6mmr\" (UID: \"952027fd-84bb-4249-83a5-04c7975a90e5\") " pod="calico-system/csi-node-driver-k6mmr" Apr 24 00:15:50.077725 kubelet[2756]: I0424 00:15:50.077087 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/952027fd-84bb-4249-83a5-04c7975a90e5-kubelet-dir\") pod \"csi-node-driver-k6mmr\" (UID: \"952027fd-84bb-4249-83a5-04c7975a90e5\") " pod="calico-system/csi-node-driver-k6mmr" Apr 24 00:15:50.077725 kubelet[2756]: I0424 00:15:50.077104 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: 
\"kubernetes.io/host-path/952027fd-84bb-4249-83a5-04c7975a90e5-varrun\") pod \"csi-node-driver-k6mmr\" (UID: \"952027fd-84bb-4249-83a5-04c7975a90e5\") " pod="calico-system/csi-node-driver-k6mmr" Apr 24 00:15:50.077725 kubelet[2756]: I0424 00:15:50.077126 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp655\" (UniqueName: \"kubernetes.io/projected/952027fd-84bb-4249-83a5-04c7975a90e5-kube-api-access-bp655\") pod \"csi-node-driver-k6mmr\" (UID: \"952027fd-84bb-4249-83a5-04c7975a90e5\") " pod="calico-system/csi-node-driver-k6mmr" Apr 24 00:15:50.089265 kubelet[2756]: E0424 00:15:50.085291 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.089265 kubelet[2756]: W0424 00:15:50.085328 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.089265 kubelet[2756]: E0424 00:15:50.085456 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.089265 kubelet[2756]: E0424 00:15:50.087014 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.089265 kubelet[2756]: W0424 00:15:50.087026 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.089265 kubelet[2756]: E0424 00:15:50.087038 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.089265 kubelet[2756]: E0424 00:15:50.088517 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.089265 kubelet[2756]: W0424 00:15:50.088656 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.089265 kubelet[2756]: E0424 00:15:50.088675 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.096255 kubelet[2756]: E0424 00:15:50.096228 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:50.097654 containerd[1582]: time="2026-04-24T00:15:50.097604357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ff89c859-m252j,Uid:9b6fc0d5-5473-4960-a525-4561d1fd0763,Namespace:calico-system,Attempt:0,}" Apr 24 00:15:50.108300 kubelet[2756]: E0424 00:15:50.107577 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.108300 kubelet[2756]: W0424 00:15:50.107596 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.108300 kubelet[2756]: E0424 00:15:50.107609 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.126647 containerd[1582]: time="2026-04-24T00:15:50.126405900Z" level=info msg="connecting to shim dc0fc05b9af81a8f4ca98b768097b01b1b4e642250c79b9de78aea47ea9a5401" address="unix:///run/containerd/s/102a48fd8ad9f40710afee40bdf4d743ea5ef0325cd433658b0ac8b7091b0896" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:15:50.162786 systemd[1]: Started cri-containerd-dc0fc05b9af81a8f4ca98b768097b01b1b4e642250c79b9de78aea47ea9a5401.scope - libcontainer container dc0fc05b9af81a8f4ca98b768097b01b1b4e642250c79b9de78aea47ea9a5401. Apr 24 00:15:50.179704 kubelet[2756]: E0424 00:15:50.178818 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.179704 kubelet[2756]: W0424 00:15:50.178843 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.179704 kubelet[2756]: E0424 00:15:50.178883 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.179704 kubelet[2756]: E0424 00:15:50.179205 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.179704 kubelet[2756]: W0424 00:15:50.179215 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.179704 kubelet[2756]: E0424 00:15:50.179238 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.179704 kubelet[2756]: E0424 00:15:50.179549 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.179704 kubelet[2756]: W0424 00:15:50.179576 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.179704 kubelet[2756]: E0424 00:15:50.179586 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.180392 kubelet[2756]: E0424 00:15:50.180299 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.180392 kubelet[2756]: W0424 00:15:50.180329 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.180392 kubelet[2756]: E0424 00:15:50.180364 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.180927 kubelet[2756]: E0424 00:15:50.180915 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.180997 kubelet[2756]: W0424 00:15:50.180983 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.181056 kubelet[2756]: E0424 00:15:50.181044 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.181702 kubelet[2756]: E0424 00:15:50.181651 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.181702 kubelet[2756]: W0424 00:15:50.181677 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.181702 kubelet[2756]: E0424 00:15:50.181688 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.182447 kubelet[2756]: E0424 00:15:50.182409 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.182447 kubelet[2756]: W0424 00:15:50.182420 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.182611 kubelet[2756]: E0424 00:15:50.182535 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.182935 kubelet[2756]: E0424 00:15:50.182903 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.182935 kubelet[2756]: W0424 00:15:50.182914 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.182935 kubelet[2756]: E0424 00:15:50.182923 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.183461 kubelet[2756]: E0424 00:15:50.183430 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.183461 kubelet[2756]: W0424 00:15:50.183442 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.183461 kubelet[2756]: E0424 00:15:50.183450 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.184328 kubelet[2756]: E0424 00:15:50.184295 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.184328 kubelet[2756]: W0424 00:15:50.184305 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.184328 kubelet[2756]: E0424 00:15:50.184315 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.184812 kubelet[2756]: E0424 00:15:50.184780 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.185677 kubelet[2756]: W0424 00:15:50.184863 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.185787 kubelet[2756]: E0424 00:15:50.185753 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.186446 kubelet[2756]: E0424 00:15:50.186389 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.186446 kubelet[2756]: W0424 00:15:50.186405 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.186614 kubelet[2756]: E0424 00:15:50.186418 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.187880 kubelet[2756]: E0424 00:15:50.187858 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.187967 kubelet[2756]: W0424 00:15:50.187930 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.187967 kubelet[2756]: E0424 00:15:50.187949 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.188687 kubelet[2756]: E0424 00:15:50.188558 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.188808 kubelet[2756]: W0424 00:15:50.188778 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.188808 kubelet[2756]: E0424 00:15:50.188794 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.190499 containerd[1582]: time="2026-04-24T00:15:50.189778937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68pf6,Uid:02206842-7e75-4023-a6c2-6a5558c9da17,Namespace:calico-system,Attempt:0,}" Apr 24 00:15:50.190778 kubelet[2756]: E0424 00:15:50.190767 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.190840 kubelet[2756]: W0424 00:15:50.190829 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.190897 kubelet[2756]: E0424 00:15:50.190886 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.192721 kubelet[2756]: E0424 00:15:50.192685 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.192721 kubelet[2756]: W0424 00:15:50.192698 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.192721 kubelet[2756]: E0424 00:15:50.192708 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.193416 kubelet[2756]: E0424 00:15:50.193404 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.193416 kubelet[2756]: W0424 00:15:50.193441 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.193416 kubelet[2756]: E0424 00:15:50.193451 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.194235 kubelet[2756]: E0424 00:15:50.194223 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.194309 kubelet[2756]: W0424 00:15:50.194298 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.194355 kubelet[2756]: E0424 00:15:50.194346 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.194751 kubelet[2756]: E0424 00:15:50.194727 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.194848 kubelet[2756]: W0424 00:15:50.194836 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.194942 kubelet[2756]: E0424 00:15:50.194930 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.195805 kubelet[2756]: E0424 00:15:50.195793 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.195859 kubelet[2756]: W0424 00:15:50.195849 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.195900 kubelet[2756]: E0424 00:15:50.195891 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.196498 kubelet[2756]: E0424 00:15:50.196461 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.196498 kubelet[2756]: W0424 00:15:50.196492 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.196570 kubelet[2756]: E0424 00:15:50.196518 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.197394 kubelet[2756]: E0424 00:15:50.197371 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.197394 kubelet[2756]: W0424 00:15:50.197389 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.197459 kubelet[2756]: E0424 00:15:50.197403 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.197877 kubelet[2756]: E0424 00:15:50.197854 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.197877 kubelet[2756]: W0424 00:15:50.197875 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.197952 kubelet[2756]: E0424 00:15:50.197891 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.198990 kubelet[2756]: E0424 00:15:50.198959 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.198990 kubelet[2756]: W0424 00:15:50.198981 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.199064 kubelet[2756]: E0424 00:15:50.199017 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.199621 kubelet[2756]: E0424 00:15:50.199565 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.199688 kubelet[2756]: W0424 00:15:50.199618 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.199726 kubelet[2756]: E0424 00:15:50.199698 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:50.215868 kubelet[2756]: E0424 00:15:50.215772 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:50.215868 kubelet[2756]: W0424 00:15:50.215801 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:50.215868 kubelet[2756]: E0424 00:15:50.215820 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:50.224076 containerd[1582]: time="2026-04-24T00:15:50.224019327Z" level=info msg="connecting to shim 899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062" address="unix:///run/containerd/s/677eca5b528e7f2568325fea7cd95d1ad60440869fa6b63476000e5ec98b31c9" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:15:50.246267 containerd[1582]: time="2026-04-24T00:15:50.246213989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-ff89c859-m252j,Uid:9b6fc0d5-5473-4960-a525-4561d1fd0763,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc0fc05b9af81a8f4ca98b768097b01b1b4e642250c79b9de78aea47ea9a5401\"" Apr 24 00:15:50.247682 kubelet[2756]: E0424 00:15:50.246997 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:50.249329 containerd[1582]: time="2026-04-24T00:15:50.249235559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\"" Apr 24 00:15:50.261308 systemd[1]: Started cri-containerd-899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062.scope - libcontainer container 899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062. 
Apr 24 00:15:50.299241 containerd[1582]: time="2026-04-24T00:15:50.299172311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-68pf6,Uid:02206842-7e75-4023-a6c2-6a5558c9da17,Namespace:calico-system,Attempt:0,} returns sandbox id \"899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062\"" Apr 24 00:15:51.069342 kubelet[2756]: E0424 00:15:51.069049 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6mmr" podUID="952027fd-84bb-4249-83a5-04c7975a90e5" Apr 24 00:15:51.098830 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2858827273.mount: Deactivated successfully. Apr 24 00:15:51.804496 containerd[1582]: time="2026-04-24T00:15:51.804420994Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:51.805681 containerd[1582]: time="2026-04-24T00:15:51.805494797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=35813139" Apr 24 00:15:51.806388 containerd[1582]: time="2026-04-24T00:15:51.806337810Z" level=info msg="ImageCreate event name:\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:51.809656 containerd[1582]: time="2026-04-24T00:15:51.808745507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:51.809656 containerd[1582]: time="2026-04-24T00:15:51.809536420Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id 
\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"35812993\" in 1.56009156s" Apr 24 00:15:51.809656 containerd[1582]: time="2026-04-24T00:15:51.809563210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\"" Apr 24 00:15:51.811041 containerd[1582]: time="2026-04-24T00:15:51.811003375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 24 00:15:51.833943 containerd[1582]: time="2026-04-24T00:15:51.833852445Z" level=info msg="CreateContainer within sandbox \"dc0fc05b9af81a8f4ca98b768097b01b1b4e642250c79b9de78aea47ea9a5401\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 24 00:15:51.840609 containerd[1582]: time="2026-04-24T00:15:51.840577366Z" level=info msg="Container 22ea1d855031e7d6b3eb0322a72b49a0c7f716661dc6f3f55a2d45d18bf3687f: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:15:51.846235 containerd[1582]: time="2026-04-24T00:15:51.846191484Z" level=info msg="CreateContainer within sandbox \"dc0fc05b9af81a8f4ca98b768097b01b1b4e642250c79b9de78aea47ea9a5401\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"22ea1d855031e7d6b3eb0322a72b49a0c7f716661dc6f3f55a2d45d18bf3687f\"" Apr 24 00:15:51.847035 containerd[1582]: time="2026-04-24T00:15:51.846918295Z" level=info msg="StartContainer for \"22ea1d855031e7d6b3eb0322a72b49a0c7f716661dc6f3f55a2d45d18bf3687f\"" Apr 24 00:15:51.848654 containerd[1582]: time="2026-04-24T00:15:51.848558331Z" level=info msg="connecting to shim 22ea1d855031e7d6b3eb0322a72b49a0c7f716661dc6f3f55a2d45d18bf3687f" address="unix:///run/containerd/s/102a48fd8ad9f40710afee40bdf4d743ea5ef0325cd433658b0ac8b7091b0896" protocol=ttrpc version=3 Apr 24 
00:15:51.873788 systemd[1]: Started cri-containerd-22ea1d855031e7d6b3eb0322a72b49a0c7f716661dc6f3f55a2d45d18bf3687f.scope - libcontainer container 22ea1d855031e7d6b3eb0322a72b49a0c7f716661dc6f3f55a2d45d18bf3687f. Apr 24 00:15:51.941203 containerd[1582]: time="2026-04-24T00:15:51.941079659Z" level=info msg="StartContainer for \"22ea1d855031e7d6b3eb0322a72b49a0c7f716661dc6f3f55a2d45d18bf3687f\" returns successfully" Apr 24 00:15:52.137290 kubelet[2756]: E0424 00:15:52.137107 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:52.183592 kubelet[2756]: E0424 00:15:52.183554 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.183592 kubelet[2756]: W0424 00:15:52.183592 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.183592 kubelet[2756]: E0424 00:15:52.183610 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.184943 kubelet[2756]: E0424 00:15:52.184899 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.184943 kubelet[2756]: W0424 00:15:52.184922 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.184943 kubelet[2756]: E0424 00:15:52.184934 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.185720 kubelet[2756]: E0424 00:15:52.185694 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.185720 kubelet[2756]: W0424 00:15:52.185713 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.185813 kubelet[2756]: E0424 00:15:52.185725 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.186071 kubelet[2756]: E0424 00:15:52.186037 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.186071 kubelet[2756]: W0424 00:15:52.186053 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.186071 kubelet[2756]: E0424 00:15:52.186062 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.186337 kubelet[2756]: E0424 00:15:52.186318 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.186337 kubelet[2756]: W0424 00:15:52.186333 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.186407 kubelet[2756]: E0424 00:15:52.186342 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.187856 kubelet[2756]: E0424 00:15:52.187796 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.187856 kubelet[2756]: W0424 00:15:52.187813 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.187856 kubelet[2756]: E0424 00:15:52.187824 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.188691 kubelet[2756]: E0424 00:15:52.188038 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.188691 kubelet[2756]: W0424 00:15:52.188047 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.188691 kubelet[2756]: E0424 00:15:52.188055 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.188691 kubelet[2756]: E0424 00:15:52.188420 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.188691 kubelet[2756]: W0424 00:15:52.188430 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.188691 kubelet[2756]: E0424 00:15:52.188439 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.188691 kubelet[2756]: E0424 00:15:52.188664 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.188691 kubelet[2756]: W0424 00:15:52.188671 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.188691 kubelet[2756]: E0424 00:15:52.188679 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.188937 kubelet[2756]: E0424 00:15:52.188869 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.188937 kubelet[2756]: W0424 00:15:52.188878 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.188937 kubelet[2756]: E0424 00:15:52.188886 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.189086 kubelet[2756]: E0424 00:15:52.189057 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.189086 kubelet[2756]: W0424 00:15:52.189077 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.189086 kubelet[2756]: E0424 00:15:52.189085 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.189384 kubelet[2756]: E0424 00:15:52.189351 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.189384 kubelet[2756]: W0424 00:15:52.189367 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.189384 kubelet[2756]: E0424 00:15:52.189375 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.189923 kubelet[2756]: E0424 00:15:52.189881 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.189923 kubelet[2756]: W0424 00:15:52.189899 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.189923 kubelet[2756]: E0424 00:15:52.189907 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.190260 kubelet[2756]: E0424 00:15:52.190229 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.190260 kubelet[2756]: W0424 00:15:52.190250 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.190260 kubelet[2756]: E0424 00:15:52.190258 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.191928 kubelet[2756]: E0424 00:15:52.191893 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.191928 kubelet[2756]: W0424 00:15:52.191918 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.191993 kubelet[2756]: E0424 00:15:52.191932 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.199319 kubelet[2756]: E0424 00:15:52.199273 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.199319 kubelet[2756]: W0424 00:15:52.199292 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.199319 kubelet[2756]: E0424 00:15:52.199304 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.199593 kubelet[2756]: E0424 00:15:52.199566 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.199593 kubelet[2756]: W0424 00:15:52.199583 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.199593 kubelet[2756]: E0424 00:15:52.199591 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.199880 kubelet[2756]: E0424 00:15:52.199853 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.199880 kubelet[2756]: W0424 00:15:52.199869 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.199880 kubelet[2756]: E0424 00:15:52.199878 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.200131 kubelet[2756]: E0424 00:15:52.200106 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.200131 kubelet[2756]: W0424 00:15:52.200123 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.200131 kubelet[2756]: E0424 00:15:52.200131 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.200440 kubelet[2756]: E0424 00:15:52.200408 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.200440 kubelet[2756]: W0424 00:15:52.200425 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.200440 kubelet[2756]: E0424 00:15:52.200433 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.200730 kubelet[2756]: E0424 00:15:52.200703 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.200730 kubelet[2756]: W0424 00:15:52.200720 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.200730 kubelet[2756]: E0424 00:15:52.200728 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.200976 kubelet[2756]: E0424 00:15:52.200948 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.200976 kubelet[2756]: W0424 00:15:52.200965 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.200976 kubelet[2756]: E0424 00:15:52.200973 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.201201 kubelet[2756]: E0424 00:15:52.201173 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.201201 kubelet[2756]: W0424 00:15:52.201189 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.201201 kubelet[2756]: E0424 00:15:52.201196 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.201457 kubelet[2756]: E0424 00:15:52.201431 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.201457 kubelet[2756]: W0424 00:15:52.201447 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.201457 kubelet[2756]: E0424 00:15:52.201454 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.202177 kubelet[2756]: E0424 00:15:52.202144 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.202177 kubelet[2756]: W0424 00:15:52.202169 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.202260 kubelet[2756]: E0424 00:15:52.202182 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.202517 kubelet[2756]: E0424 00:15:52.202484 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.202517 kubelet[2756]: W0424 00:15:52.202510 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.202517 kubelet[2756]: E0424 00:15:52.202524 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.203163 kubelet[2756]: E0424 00:15:52.203109 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.203163 kubelet[2756]: W0424 00:15:52.203125 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.203163 kubelet[2756]: E0424 00:15:52.203138 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.203776 kubelet[2756]: E0424 00:15:52.203731 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.203776 kubelet[2756]: W0424 00:15:52.203761 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.203776 kubelet[2756]: E0424 00:15:52.203770 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.204792 kubelet[2756]: E0424 00:15:52.204710 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.204792 kubelet[2756]: W0424 00:15:52.204727 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.204792 kubelet[2756]: E0424 00:15:52.204736 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.205193 kubelet[2756]: E0424 00:15:52.204952 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.205193 kubelet[2756]: W0424 00:15:52.204962 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.205193 kubelet[2756]: E0424 00:15:52.204970 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.205267 kubelet[2756]: E0424 00:15:52.205233 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.205267 kubelet[2756]: W0424 00:15:52.205242 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.205267 kubelet[2756]: E0424 00:15:52.205250 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.205524 kubelet[2756]: E0424 00:15:52.205464 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.205524 kubelet[2756]: W0424 00:15:52.205483 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.205524 kubelet[2756]: E0424 00:15:52.205491 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 24 00:15:52.206380 kubelet[2756]: E0424 00:15:52.206352 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 24 00:15:52.206380 kubelet[2756]: W0424 00:15:52.206371 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 24 00:15:52.206380 kubelet[2756]: E0424 00:15:52.206380 2756 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 24 00:15:52.587554 containerd[1582]: time="2026-04-24T00:15:52.587469858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:52.588608 containerd[1582]: time="2026-04-24T00:15:52.588467961Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=4601981" Apr 24 00:15:52.588733 containerd[1582]: time="2026-04-24T00:15:52.588548671Z" level=info msg="ImageCreate event name:\"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:52.591039 containerd[1582]: time="2026-04-24T00:15:52.591015318Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:52.593042 containerd[1582]: time="2026-04-24T00:15:52.592971794Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"7563366\" in 781.587358ms" Apr 24 00:15:52.593042 containerd[1582]: time="2026-04-24T00:15:52.593036044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\"" Apr 24 00:15:52.597984 containerd[1582]: time="2026-04-24T00:15:52.597910069Z" level=info msg="CreateContainer within sandbox \"899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 24 00:15:52.607664 containerd[1582]: time="2026-04-24T00:15:52.606913656Z" level=info msg="Container d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:15:52.627484 containerd[1582]: time="2026-04-24T00:15:52.627402707Z" level=info msg="CreateContainer within sandbox \"899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487\"" Apr 24 00:15:52.628849 containerd[1582]: time="2026-04-24T00:15:52.628796671Z" level=info msg="StartContainer for \"d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487\"" Apr 24 00:15:52.636279 containerd[1582]: time="2026-04-24T00:15:52.632308672Z" level=info msg="connecting to shim d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487" address="unix:///run/containerd/s/677eca5b528e7f2568325fea7cd95d1ad60440869fa6b63476000e5ec98b31c9" protocol=ttrpc version=3 Apr 24 00:15:52.663975 systemd[1]: Started cri-containerd-d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487.scope - libcontainer container 
d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487. Apr 24 00:15:52.764913 containerd[1582]: time="2026-04-24T00:15:52.764844935Z" level=info msg="StartContainer for \"d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487\" returns successfully" Apr 24 00:15:52.783169 systemd[1]: cri-containerd-d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487.scope: Deactivated successfully. Apr 24 00:15:52.788767 containerd[1582]: time="2026-04-24T00:15:52.788724867Z" level=info msg="received container exit event container_id:\"d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487\" id:\"d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487\" pid:3412 exited_at:{seconds:1776989752 nanos:788249286}" Apr 24 00:15:52.818862 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7a23755a0ed272b68f37b7d8133eec13215d4853b1432cd03f8b7363f634487-rootfs.mount: Deactivated successfully. Apr 24 00:15:53.070565 kubelet[2756]: E0424 00:15:53.069914 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6mmr" podUID="952027fd-84bb-4249-83a5-04c7975a90e5" Apr 24 00:15:53.143461 kubelet[2756]: I0424 00:15:53.143427 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 00:15:53.144440 kubelet[2756]: E0424 00:15:53.144030 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:15:53.146869 containerd[1582]: time="2026-04-24T00:15:53.146832024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\"" Apr 24 00:15:53.183335 kubelet[2756]: I0424 00:15:53.183249 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-ff89c859-m252j" podStartSLOduration=2.620545229 podStartE2EDuration="4.183230907s" podCreationTimestamp="2026-04-24 00:15:49 +0000 UTC" firstStartedPulling="2026-04-24 00:15:50.247992045 +0000 UTC m=+19.295624173" lastFinishedPulling="2026-04-24 00:15:51.810677723 +0000 UTC m=+20.858309851" observedRunningTime="2026-04-24 00:15:52.160029107 +0000 UTC m=+21.207661235" watchObservedRunningTime="2026-04-24 00:15:53.183230907 +0000 UTC m=+22.230863035" Apr 24 00:15:55.070387 kubelet[2756]: E0424 00:15:55.069603 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6mmr" podUID="952027fd-84bb-4249-83a5-04c7975a90e5" Apr 24 00:15:57.069373 kubelet[2756]: E0424 00:15:57.069309 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6mmr" podUID="952027fd-84bb-4249-83a5-04c7975a90e5" Apr 24 00:15:57.339792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217742362.mount: Deactivated successfully. 
Apr 24 00:15:57.370317 containerd[1582]: time="2026-04-24T00:15:57.370274031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:57.371129 containerd[1582]: time="2026-04-24T00:15:57.370953662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=159374404" Apr 24 00:15:57.371623 containerd[1582]: time="2026-04-24T00:15:57.371572813Z" level=info msg="ImageCreate event name:\"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:57.373147 containerd[1582]: time="2026-04-24T00:15:57.373119627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:15:57.373988 containerd[1582]: time="2026-04-24T00:15:57.373965929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"159374266\" in 4.226478473s" Apr 24 00:15:57.374063 containerd[1582]: time="2026-04-24T00:15:57.374049019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\"" Apr 24 00:15:57.377875 containerd[1582]: time="2026-04-24T00:15:57.377847478Z" level=info msg="CreateContainer within sandbox \"899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 24 00:15:57.387572 containerd[1582]: time="2026-04-24T00:15:57.386757950Z" level=info msg="Container 
456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:15:57.392627 containerd[1582]: time="2026-04-24T00:15:57.392596705Z" level=info msg="CreateContainer within sandbox \"899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451\"" Apr 24 00:15:57.393263 containerd[1582]: time="2026-04-24T00:15:57.393233205Z" level=info msg="StartContainer for \"456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451\"" Apr 24 00:15:57.395668 containerd[1582]: time="2026-04-24T00:15:57.395609392Z" level=info msg="connecting to shim 456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451" address="unix:///run/containerd/s/677eca5b528e7f2568325fea7cd95d1ad60440869fa6b63476000e5ec98b31c9" protocol=ttrpc version=3 Apr 24 00:15:57.423777 systemd[1]: Started cri-containerd-456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451.scope - libcontainer container 456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451. Apr 24 00:15:57.490920 containerd[1582]: time="2026-04-24T00:15:57.490812393Z" level=info msg="StartContainer for \"456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451\" returns successfully" Apr 24 00:15:57.541908 systemd[1]: cri-containerd-456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451.scope: Deactivated successfully. 
Apr 24 00:15:57.545126 containerd[1582]: time="2026-04-24T00:15:57.545096785Z" level=info msg="received container exit event container_id:\"456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451\" id:\"456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451\" pid:3467 exited_at:{seconds:1776989757 nanos:544081902}" Apr 24 00:15:57.575592 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-456eac5ddabb9dbe626d3c6b5b507caab411aefc5fea4152d1d8edfad4c43451-rootfs.mount: Deactivated successfully. Apr 24 00:15:58.157273 containerd[1582]: time="2026-04-24T00:15:58.157236388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\"" Apr 24 00:15:59.068231 kubelet[2756]: E0424 00:15:59.067657 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6mmr" podUID="952027fd-84bb-4249-83a5-04c7975a90e5" Apr 24 00:16:00.393469 containerd[1582]: time="2026-04-24T00:16:00.393415463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:00.394350 containerd[1582]: time="2026-04-24T00:16:00.394219915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=67713351" Apr 24 00:16:00.394832 containerd[1582]: time="2026-04-24T00:16:00.394799375Z" level=info msg="ImageCreate event name:\"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:00.396882 containerd[1582]: time="2026-04-24T00:16:00.396830560Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Apr 24 00:16:00.397860 containerd[1582]: time="2026-04-24T00:16:00.397830842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"70674776\" in 2.240562074s" Apr 24 00:16:00.397940 containerd[1582]: time="2026-04-24T00:16:00.397926652Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\"" Apr 24 00:16:00.404441 containerd[1582]: time="2026-04-24T00:16:00.404388506Z" level=info msg="CreateContainer within sandbox \"899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 24 00:16:00.413482 containerd[1582]: time="2026-04-24T00:16:00.412826304Z" level=info msg="Container f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:00.427719 containerd[1582]: time="2026-04-24T00:16:00.427682387Z" level=info msg="CreateContainer within sandbox \"899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb\"" Apr 24 00:16:00.429764 containerd[1582]: time="2026-04-24T00:16:00.429698471Z" level=info msg="StartContainer for \"f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb\"" Apr 24 00:16:00.431202 containerd[1582]: time="2026-04-24T00:16:00.431142335Z" level=info msg="connecting to shim f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb" address="unix:///run/containerd/s/677eca5b528e7f2568325fea7cd95d1ad60440869fa6b63476000e5ec98b31c9" protocol=ttrpc version=3 Apr 24 
00:16:00.463779 systemd[1]: Started cri-containerd-f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb.scope - libcontainer container f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb. Apr 24 00:16:00.548487 containerd[1582]: time="2026-04-24T00:16:00.548422920Z" level=info msg="StartContainer for \"f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb\" returns successfully" Apr 24 00:16:01.068525 kubelet[2756]: E0424 00:16:01.068093 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k6mmr" podUID="952027fd-84bb-4249-83a5-04c7975a90e5" Apr 24 00:16:01.174470 containerd[1582]: time="2026-04-24T00:16:01.174433071Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 24 00:16:01.179022 systemd[1]: cri-containerd-f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb.scope: Deactivated successfully. Apr 24 00:16:01.179545 systemd[1]: cri-containerd-f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb.scope: Consumed 644ms CPU time, 187.2M memory peak, 2.4M read from disk, 173.7M written to disk. 
Apr 24 00:16:01.182393 containerd[1582]: time="2026-04-24T00:16:01.182172907Z" level=info msg="received container exit event container_id:\"f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb\" id:\"f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb\" pid:3524 exited_at:{seconds:1776989761 nanos:181619836}" Apr 24 00:16:01.210672 kubelet[2756]: I0424 00:16:01.210644 2756 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 24 00:16:01.282077 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f40845ddf524d6dba03d96d95d36a9734eceed92bd23b7c9c5a020d0d5ea7deb-rootfs.mount: Deactivated successfully. Apr 24 00:16:01.328418 systemd[1]: Created slice kubepods-burstable-pod72be5cc9_be34_45a3_b8c1_d04e0ffcd2d4.slice - libcontainer container kubepods-burstable-pod72be5cc9_be34_45a3_b8c1_d04e0ffcd2d4.slice. Apr 24 00:16:01.362091 systemd[1]: Created slice kubepods-besteffort-pod7e12a673_c35b_4194_9f5f_bfd64649b8e2.slice - libcontainer container kubepods-besteffort-pod7e12a673_c35b_4194_9f5f_bfd64649b8e2.slice. Apr 24 00:16:01.374102 systemd[1]: Created slice kubepods-besteffort-pod931ceb24_9d3b_411d_bab2_39cfe6a8a056.slice - libcontainer container kubepods-besteffort-pod931ceb24_9d3b_411d_bab2_39cfe6a8a056.slice. 
Apr 24 00:16:01.380988 kubelet[2756]: I0424 00:16:01.380919 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcrxt\" (UniqueName: \"kubernetes.io/projected/97df688a-49d4-4441-94d2-8a46a7bf5835-kube-api-access-bcrxt\") pod \"calico-kube-controllers-8949555b5-f5zck\" (UID: \"97df688a-49d4-4441-94d2-8a46a7bf5835\") " pod="calico-system/calico-kube-controllers-8949555b5-f5zck" Apr 24 00:16:01.381652 kubelet[2756]: I0424 00:16:01.381456 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0feedfd4-cd8a-4596-9bcc-87eb0aa67f44-goldmane-key-pair\") pod \"goldmane-57885fdd4c-qjw6h\" (UID: \"0feedfd4-cd8a-4596-9bcc-87eb0aa67f44\") " pod="calico-system/goldmane-57885fdd4c-qjw6h" Apr 24 00:16:01.381652 kubelet[2756]: I0424 00:16:01.381488 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45gdp\" (UniqueName: \"kubernetes.io/projected/0feedfd4-cd8a-4596-9bcc-87eb0aa67f44-kube-api-access-45gdp\") pod \"goldmane-57885fdd4c-qjw6h\" (UID: \"0feedfd4-cd8a-4596-9bcc-87eb0aa67f44\") " pod="calico-system/goldmane-57885fdd4c-qjw6h" Apr 24 00:16:01.381652 kubelet[2756]: I0424 00:16:01.381508 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/931ceb24-9d3b-411d-bab2-39cfe6a8a056-whisker-ca-bundle\") pod \"whisker-8dcd4b75b-mdtnc\" (UID: \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\") " pod="calico-system/whisker-8dcd4b75b-mdtnc" Apr 24 00:16:01.381652 kubelet[2756]: I0424 00:16:01.381612 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4-config-volume\") pod \"coredns-674b8bbfcf-smrjm\" (UID: 
\"72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4\") " pod="kube-system/coredns-674b8bbfcf-smrjm" Apr 24 00:16:01.381966 kubelet[2756]: I0424 00:16:01.381950 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjxg5\" (UniqueName: \"kubernetes.io/projected/72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4-kube-api-access-xjxg5\") pod \"coredns-674b8bbfcf-smrjm\" (UID: \"72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4\") " pod="kube-system/coredns-674b8bbfcf-smrjm" Apr 24 00:16:01.382193 kubelet[2756]: I0424 00:16:01.382178 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0f5c8edb-698c-49fe-9bda-1ab98ebf3b73-calico-apiserver-certs\") pod \"calico-apiserver-777b497fdb-pfklg\" (UID: \"0f5c8edb-698c-49fe-9bda-1ab98ebf3b73\") " pod="calico-system/calico-apiserver-777b497fdb-pfklg" Apr 24 00:16:01.382293 kubelet[2756]: I0424 00:16:01.382279 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swnhw\" (UniqueName: \"kubernetes.io/projected/d2ade395-3c55-4406-a8dd-0bc70a4e5f7d-kube-api-access-swnhw\") pod \"coredns-674b8bbfcf-67j2v\" (UID: \"d2ade395-3c55-4406-a8dd-0bc70a4e5f7d\") " pod="kube-system/coredns-674b8bbfcf-67j2v" Apr 24 00:16:01.382667 kubelet[2756]: I0424 00:16:01.382624 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97df688a-49d4-4441-94d2-8a46a7bf5835-tigera-ca-bundle\") pod \"calico-kube-controllers-8949555b5-f5zck\" (UID: \"97df688a-49d4-4441-94d2-8a46a7bf5835\") " pod="calico-system/calico-kube-controllers-8949555b5-f5zck" Apr 24 00:16:01.382747 kubelet[2756]: I0424 00:16:01.382734 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/7e12a673-c35b-4194-9f5f-bfd64649b8e2-calico-apiserver-certs\") pod \"calico-apiserver-777b497fdb-2l7vc\" (UID: \"7e12a673-c35b-4194-9f5f-bfd64649b8e2\") " pod="calico-system/calico-apiserver-777b497fdb-2l7vc" Apr 24 00:16:01.382809 kubelet[2756]: I0424 00:16:01.382797 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qklj4\" (UniqueName: \"kubernetes.io/projected/931ceb24-9d3b-411d-bab2-39cfe6a8a056-kube-api-access-qklj4\") pod \"whisker-8dcd4b75b-mdtnc\" (UID: \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\") " pod="calico-system/whisker-8dcd4b75b-mdtnc" Apr 24 00:16:01.382882 kubelet[2756]: I0424 00:16:01.382868 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/931ceb24-9d3b-411d-bab2-39cfe6a8a056-whisker-backend-key-pair\") pod \"whisker-8dcd4b75b-mdtnc\" (UID: \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\") " pod="calico-system/whisker-8dcd4b75b-mdtnc" Apr 24 00:16:01.385650 kubelet[2756]: I0424 00:16:01.383466 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2ade395-3c55-4406-a8dd-0bc70a4e5f7d-config-volume\") pod \"coredns-674b8bbfcf-67j2v\" (UID: \"d2ade395-3c55-4406-a8dd-0bc70a4e5f7d\") " pod="kube-system/coredns-674b8bbfcf-67j2v" Apr 24 00:16:01.385650 kubelet[2756]: I0424 00:16:01.383488 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gssfd\" (UniqueName: \"kubernetes.io/projected/7e12a673-c35b-4194-9f5f-bfd64649b8e2-kube-api-access-gssfd\") pod \"calico-apiserver-777b497fdb-2l7vc\" (UID: \"7e12a673-c35b-4194-9f5f-bfd64649b8e2\") " pod="calico-system/calico-apiserver-777b497fdb-2l7vc" Apr 24 00:16:01.385650 kubelet[2756]: I0424 00:16:01.383503 2756 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0feedfd4-cd8a-4596-9bcc-87eb0aa67f44-config\") pod \"goldmane-57885fdd4c-qjw6h\" (UID: \"0feedfd4-cd8a-4596-9bcc-87eb0aa67f44\") " pod="calico-system/goldmane-57885fdd4c-qjw6h" Apr 24 00:16:01.385650 kubelet[2756]: I0424 00:16:01.383526 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl49w\" (UniqueName: \"kubernetes.io/projected/0f5c8edb-698c-49fe-9bda-1ab98ebf3b73-kube-api-access-tl49w\") pod \"calico-apiserver-777b497fdb-pfklg\" (UID: \"0f5c8edb-698c-49fe-9bda-1ab98ebf3b73\") " pod="calico-system/calico-apiserver-777b497fdb-pfklg" Apr 24 00:16:01.385650 kubelet[2756]: I0424 00:16:01.383541 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0feedfd4-cd8a-4596-9bcc-87eb0aa67f44-goldmane-ca-bundle\") pod \"goldmane-57885fdd4c-qjw6h\" (UID: \"0feedfd4-cd8a-4596-9bcc-87eb0aa67f44\") " pod="calico-system/goldmane-57885fdd4c-qjw6h" Apr 24 00:16:01.385794 kubelet[2756]: I0424 00:16:01.383555 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/931ceb24-9d3b-411d-bab2-39cfe6a8a056-nginx-config\") pod \"whisker-8dcd4b75b-mdtnc\" (UID: \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\") " pod="calico-system/whisker-8dcd4b75b-mdtnc" Apr 24 00:16:01.390445 systemd[1]: Created slice kubepods-besteffort-pod0f5c8edb_698c_49fe_9bda_1ab98ebf3b73.slice - libcontainer container kubepods-besteffort-pod0f5c8edb_698c_49fe_9bda_1ab98ebf3b73.slice. Apr 24 00:16:01.402140 systemd[1]: Created slice kubepods-burstable-podd2ade395_3c55_4406_a8dd_0bc70a4e5f7d.slice - libcontainer container kubepods-burstable-podd2ade395_3c55_4406_a8dd_0bc70a4e5f7d.slice. 
Apr 24 00:16:01.415264 systemd[1]: Created slice kubepods-besteffort-pod0feedfd4_cd8a_4596_9bcc_87eb0aa67f44.slice - libcontainer container kubepods-besteffort-pod0feedfd4_cd8a_4596_9bcc_87eb0aa67f44.slice. Apr 24 00:16:01.425521 systemd[1]: Created slice kubepods-besteffort-pod97df688a_49d4_4441_94d2_8a46a7bf5835.slice - libcontainer container kubepods-besteffort-pod97df688a_49d4_4441_94d2_8a46a7bf5835.slice. Apr 24 00:16:01.640287 kubelet[2756]: E0424 00:16:01.639496 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:16:01.642235 containerd[1582]: time="2026-04-24T00:16:01.642188544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smrjm,Uid:72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4,Namespace:kube-system,Attempt:0,}" Apr 24 00:16:01.671643 containerd[1582]: time="2026-04-24T00:16:01.671580177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777b497fdb-2l7vc,Uid:7e12a673-c35b-4194-9f5f-bfd64649b8e2,Namespace:calico-system,Attempt:0,}" Apr 24 00:16:01.687603 containerd[1582]: time="2026-04-24T00:16:01.687423470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8dcd4b75b-mdtnc,Uid:931ceb24-9d3b-411d-bab2-39cfe6a8a056,Namespace:calico-system,Attempt:0,}" Apr 24 00:16:01.699154 containerd[1582]: time="2026-04-24T00:16:01.699061444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777b497fdb-pfklg,Uid:0f5c8edb-698c-49fe-9bda-1ab98ebf3b73,Namespace:calico-system,Attempt:0,}" Apr 24 00:16:01.707496 kubelet[2756]: E0424 00:16:01.707424 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:16:01.712056 containerd[1582]: time="2026-04-24T00:16:01.711996021Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-67j2v,Uid:d2ade395-3c55-4406-a8dd-0bc70a4e5f7d,Namespace:kube-system,Attempt:0,}" Apr 24 00:16:01.723158 containerd[1582]: time="2026-04-24T00:16:01.723116495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-qjw6h,Uid:0feedfd4-cd8a-4596-9bcc-87eb0aa67f44,Namespace:calico-system,Attempt:0,}" Apr 24 00:16:01.739584 containerd[1582]: time="2026-04-24T00:16:01.739550150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8949555b5-f5zck,Uid:97df688a-49d4-4441-94d2-8a46a7bf5835,Namespace:calico-system,Attempt:0,}" Apr 24 00:16:01.886207 containerd[1582]: time="2026-04-24T00:16:01.886155498Z" level=error msg="Failed to destroy network for sandbox \"245b7e966744838e2a73cbbc3548ceb41facc7a553f999e7f145cbed66b4df93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.888536 containerd[1582]: time="2026-04-24T00:16:01.888470153Z" level=error msg="Failed to destroy network for sandbox \"2bfdc007d72c377742f3e0d63ca1b52c551f85ca6a4d4b3d1a3ba422b6c51c4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.889778 containerd[1582]: time="2026-04-24T00:16:01.889736936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777b497fdb-2l7vc,Uid:7e12a673-c35b-4194-9f5f-bfd64649b8e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"245b7e966744838e2a73cbbc3548ceb41facc7a553f999e7f145cbed66b4df93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 24 00:16:01.890413 kubelet[2756]: E0424 00:16:01.890251 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245b7e966744838e2a73cbbc3548ceb41facc7a553f999e7f145cbed66b4df93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.890413 kubelet[2756]: E0424 00:16:01.890333 2756 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245b7e966744838e2a73cbbc3548ceb41facc7a553f999e7f145cbed66b4df93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-777b497fdb-2l7vc" Apr 24 00:16:01.890413 kubelet[2756]: E0424 00:16:01.890362 2756 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245b7e966744838e2a73cbbc3548ceb41facc7a553f999e7f145cbed66b4df93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-777b497fdb-2l7vc" Apr 24 00:16:01.890544 kubelet[2756]: E0424 00:16:01.890419 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-777b497fdb-2l7vc_calico-system(7e12a673-c35b-4194-9f5f-bfd64649b8e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-777b497fdb-2l7vc_calico-system(7e12a673-c35b-4194-9f5f-bfd64649b8e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"245b7e966744838e2a73cbbc3548ceb41facc7a553f999e7f145cbed66b4df93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-777b497fdb-2l7vc" podUID="7e12a673-c35b-4194-9f5f-bfd64649b8e2" Apr 24 00:16:01.894100 containerd[1582]: time="2026-04-24T00:16:01.893971874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smrjm,Uid:72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bfdc007d72c377742f3e0d63ca1b52c551f85ca6a4d4b3d1a3ba422b6c51c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.894190 kubelet[2756]: E0424 00:16:01.894170 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bfdc007d72c377742f3e0d63ca1b52c551f85ca6a4d4b3d1a3ba422b6c51c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.894237 kubelet[2756]: E0424 00:16:01.894208 2756 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bfdc007d72c377742f3e0d63ca1b52c551f85ca6a4d4b3d1a3ba422b6c51c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-smrjm" Apr 24 00:16:01.894237 kubelet[2756]: E0424 00:16:01.894227 2756 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bfdc007d72c377742f3e0d63ca1b52c551f85ca6a4d4b3d1a3ba422b6c51c4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-smrjm" Apr 24 00:16:01.894444 kubelet[2756]: E0424 00:16:01.894268 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-smrjm_kube-system(72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-smrjm_kube-system(72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bfdc007d72c377742f3e0d63ca1b52c551f85ca6a4d4b3d1a3ba422b6c51c4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-smrjm" podUID="72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4" Apr 24 00:16:01.922910 containerd[1582]: time="2026-04-24T00:16:01.922862805Z" level=error msg="Failed to destroy network for sandbox \"4e2de34a3941bb97f406615374dd345c5b0dd6aea80787167bc66a649fd8a7ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.923306 containerd[1582]: time="2026-04-24T00:16:01.923235016Z" level=error msg="Failed to destroy network for sandbox \"aa053ff69c0a1f8e61a3fa186823af1058b24f0b614bab526b5806c27d73c776\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.926519 containerd[1582]: 
time="2026-04-24T00:16:01.926468693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8dcd4b75b-mdtnc,Uid:931ceb24-9d3b-411d-bab2-39cfe6a8a056,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e2de34a3941bb97f406615374dd345c5b0dd6aea80787167bc66a649fd8a7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.927028 kubelet[2756]: E0424 00:16:01.926988 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e2de34a3941bb97f406615374dd345c5b0dd6aea80787167bc66a649fd8a7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.927091 kubelet[2756]: E0424 00:16:01.927051 2756 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e2de34a3941bb97f406615374dd345c5b0dd6aea80787167bc66a649fd8a7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8dcd4b75b-mdtnc" Apr 24 00:16:01.927091 kubelet[2756]: E0424 00:16:01.927076 2756 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e2de34a3941bb97f406615374dd345c5b0dd6aea80787167bc66a649fd8a7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8dcd4b75b-mdtnc" Apr 24 00:16:01.927392 
kubelet[2756]: E0424 00:16:01.927143 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8dcd4b75b-mdtnc_calico-system(931ceb24-9d3b-411d-bab2-39cfe6a8a056)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8dcd4b75b-mdtnc_calico-system(931ceb24-9d3b-411d-bab2-39cfe6a8a056)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e2de34a3941bb97f406615374dd345c5b0dd6aea80787167bc66a649fd8a7ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8dcd4b75b-mdtnc" podUID="931ceb24-9d3b-411d-bab2-39cfe6a8a056" Apr 24 00:16:01.928326 containerd[1582]: time="2026-04-24T00:16:01.928226356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777b497fdb-pfklg,Uid:0f5c8edb-698c-49fe-9bda-1ab98ebf3b73,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa053ff69c0a1f8e61a3fa186823af1058b24f0b614bab526b5806c27d73c776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.928819 kubelet[2756]: E0424 00:16:01.928723 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa053ff69c0a1f8e61a3fa186823af1058b24f0b614bab526b5806c27d73c776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.928819 kubelet[2756]: E0424 00:16:01.928766 2756 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"aa053ff69c0a1f8e61a3fa186823af1058b24f0b614bab526b5806c27d73c776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-777b497fdb-pfklg" Apr 24 00:16:01.928819 kubelet[2756]: E0424 00:16:01.928785 2756 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa053ff69c0a1f8e61a3fa186823af1058b24f0b614bab526b5806c27d73c776\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-777b497fdb-pfklg" Apr 24 00:16:01.929089 kubelet[2756]: E0424 00:16:01.928831 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-777b497fdb-pfklg_calico-system(0f5c8edb-698c-49fe-9bda-1ab98ebf3b73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-777b497fdb-pfklg_calico-system(0f5c8edb-698c-49fe-9bda-1ab98ebf3b73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa053ff69c0a1f8e61a3fa186823af1058b24f0b614bab526b5806c27d73c776\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-777b497fdb-pfklg" podUID="0f5c8edb-698c-49fe-9bda-1ab98ebf3b73" Apr 24 00:16:01.937900 containerd[1582]: time="2026-04-24T00:16:01.937867557Z" level=error msg="Failed to destroy network for sandbox \"c9117a5c2b0531023fe399e1c088ac9e642f422a244d7f234df14bb0dce14c65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Apr 24 00:16:01.939160 containerd[1582]: time="2026-04-24T00:16:01.939043409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-67j2v,Uid:d2ade395-3c55-4406-a8dd-0bc70a4e5f7d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9117a5c2b0531023fe399e1c088ac9e642f422a244d7f234df14bb0dce14c65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.939333 kubelet[2756]: E0424 00:16:01.939253 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9117a5c2b0531023fe399e1c088ac9e642f422a244d7f234df14bb0dce14c65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.939333 kubelet[2756]: E0424 00:16:01.939321 2756 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9117a5c2b0531023fe399e1c088ac9e642f422a244d7f234df14bb0dce14c65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-67j2v" Apr 24 00:16:01.939418 kubelet[2756]: E0424 00:16:01.939342 2756 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9117a5c2b0531023fe399e1c088ac9e642f422a244d7f234df14bb0dce14c65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-67j2v" Apr 24 00:16:01.941019 kubelet[2756]: E0424 00:16:01.939415 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-67j2v_kube-system(d2ade395-3c55-4406-a8dd-0bc70a4e5f7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-67j2v_kube-system(d2ade395-3c55-4406-a8dd-0bc70a4e5f7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9117a5c2b0531023fe399e1c088ac9e642f422a244d7f234df14bb0dce14c65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-67j2v" podUID="d2ade395-3c55-4406-a8dd-0bc70a4e5f7d" Apr 24 00:16:01.942695 containerd[1582]: time="2026-04-24T00:16:01.942606377Z" level=error msg="Failed to destroy network for sandbox \"6e6dfe6683277d08ccb4bf050a196db76269a336fba86e900fa404cc5bda95b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.944127 containerd[1582]: time="2026-04-24T00:16:01.944084930Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-qjw6h,Uid:0feedfd4-cd8a-4596-9bcc-87eb0aa67f44,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6dfe6683277d08ccb4bf050a196db76269a336fba86e900fa404cc5bda95b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.944294 kubelet[2756]: E0424 00:16:01.944264 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"6e6dfe6683277d08ccb4bf050a196db76269a336fba86e900fa404cc5bda95b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.944434 kubelet[2756]: E0424 00:16:01.944314 2756 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6dfe6683277d08ccb4bf050a196db76269a336fba86e900fa404cc5bda95b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-57885fdd4c-qjw6h" Apr 24 00:16:01.944434 kubelet[2756]: E0424 00:16:01.944344 2756 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e6dfe6683277d08ccb4bf050a196db76269a336fba86e900fa404cc5bda95b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-57885fdd4c-qjw6h" Apr 24 00:16:01.944434 kubelet[2756]: E0424 00:16:01.944395 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-57885fdd4c-qjw6h_calico-system(0feedfd4-cd8a-4596-9bcc-87eb0aa67f44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-57885fdd4c-qjw6h_calico-system(0feedfd4-cd8a-4596-9bcc-87eb0aa67f44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e6dfe6683277d08ccb4bf050a196db76269a336fba86e900fa404cc5bda95b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-57885fdd4c-qjw6h" podUID="0feedfd4-cd8a-4596-9bcc-87eb0aa67f44" Apr 24 00:16:01.949888 containerd[1582]: time="2026-04-24T00:16:01.949849572Z" level=error msg="Failed to destroy network for sandbox \"97e681288d77dccf85800e37d04d84e5b617f709b717126e1c513ad4ef88f671\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.950759 containerd[1582]: time="2026-04-24T00:16:01.950728434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8949555b5-f5zck,Uid:97df688a-49d4-4441-94d2-8a46a7bf5835,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e681288d77dccf85800e37d04d84e5b617f709b717126e1c513ad4ef88f671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.950941 kubelet[2756]: E0424 00:16:01.950883 2756 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e681288d77dccf85800e37d04d84e5b617f709b717126e1c513ad4ef88f671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 24 00:16:01.950941 kubelet[2756]: E0424 00:16:01.950924 2756 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e681288d77dccf85800e37d04d84e5b617f709b717126e1c513ad4ef88f671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-8949555b5-f5zck" Apr 24 00:16:01.951158 kubelet[2756]: E0424 00:16:01.950942 2756 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97e681288d77dccf85800e37d04d84e5b617f709b717126e1c513ad4ef88f671\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8949555b5-f5zck" Apr 24 00:16:01.951158 kubelet[2756]: E0424 00:16:01.950992 2756 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8949555b5-f5zck_calico-system(97df688a-49d4-4441-94d2-8a46a7bf5835)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8949555b5-f5zck_calico-system(97df688a-49d4-4441-94d2-8a46a7bf5835)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97e681288d77dccf85800e37d04d84e5b617f709b717126e1c513ad4ef88f671\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8949555b5-f5zck" podUID="97df688a-49d4-4441-94d2-8a46a7bf5835" Apr 24 00:16:02.173150 kubelet[2756]: I0424 00:16:02.172053 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 00:16:02.174958 kubelet[2756]: E0424 00:16:02.174843 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:16:02.201554 containerd[1582]: time="2026-04-24T00:16:02.201492648Z" level=info msg="CreateContainer within sandbox 
\"899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 24 00:16:02.216670 containerd[1582]: time="2026-04-24T00:16:02.216574989Z" level=info msg="Container e0bfd59789ddf29cd032be71936a8dd6cd6d9b050ace2d09af34c9566489c9d5: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:02.233937 containerd[1582]: time="2026-04-24T00:16:02.233880214Z" level=info msg="CreateContainer within sandbox \"899362e2770ddd21e4d1503e70f666d6e22ed2d50942f7f750d55a8e745c6062\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e0bfd59789ddf29cd032be71936a8dd6cd6d9b050ace2d09af34c9566489c9d5\"" Apr 24 00:16:02.234896 containerd[1582]: time="2026-04-24T00:16:02.234706465Z" level=info msg="StartContainer for \"e0bfd59789ddf29cd032be71936a8dd6cd6d9b050ace2d09af34c9566489c9d5\"" Apr 24 00:16:02.236188 containerd[1582]: time="2026-04-24T00:16:02.236153198Z" level=info msg="connecting to shim e0bfd59789ddf29cd032be71936a8dd6cd6d9b050ace2d09af34c9566489c9d5" address="unix:///run/containerd/s/677eca5b528e7f2568325fea7cd95d1ad60440869fa6b63476000e5ec98b31c9" protocol=ttrpc version=3 Apr 24 00:16:02.262835 systemd[1]: Started cri-containerd-e0bfd59789ddf29cd032be71936a8dd6cd6d9b050ace2d09af34c9566489c9d5.scope - libcontainer container e0bfd59789ddf29cd032be71936a8dd6cd6d9b050ace2d09af34c9566489c9d5. Apr 24 00:16:02.356352 containerd[1582]: time="2026-04-24T00:16:02.354714320Z" level=info msg="StartContainer for \"e0bfd59789ddf29cd032be71936a8dd6cd6d9b050ace2d09af34c9566489c9d5\" returns successfully" Apr 24 00:16:02.503333 systemd[1]: run-netns-cni\x2d2e8c8536\x2dc9cc\x2da3fd\x2d86e3\x2d69d25e703480.mount: Deactivated successfully. 
Apr 24 00:16:02.597095 kubelet[2756]: I0424 00:16:02.597016 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qklj4\" (UniqueName: \"kubernetes.io/projected/931ceb24-9d3b-411d-bab2-39cfe6a8a056-kube-api-access-qklj4\") pod \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\" (UID: \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\") " Apr 24 00:16:02.597434 kubelet[2756]: I0424 00:16:02.597354 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/931ceb24-9d3b-411d-bab2-39cfe6a8a056-whisker-backend-key-pair\") pod \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\" (UID: \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\") " Apr 24 00:16:02.598435 kubelet[2756]: I0424 00:16:02.597559 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/931ceb24-9d3b-411d-bab2-39cfe6a8a056-nginx-config\") pod \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\" (UID: \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\") " Apr 24 00:16:02.598435 kubelet[2756]: I0424 00:16:02.597593 2756 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/931ceb24-9d3b-411d-bab2-39cfe6a8a056-whisker-ca-bundle\") pod \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\" (UID: \"931ceb24-9d3b-411d-bab2-39cfe6a8a056\") " Apr 24 00:16:02.598679 kubelet[2756]: I0424 00:16:02.598653 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/931ceb24-9d3b-411d-bab2-39cfe6a8a056-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "931ceb24-9d3b-411d-bab2-39cfe6a8a056" (UID: "931ceb24-9d3b-411d-bab2-39cfe6a8a056"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 00:16:02.598985 kubelet[2756]: I0424 00:16:02.598946 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/931ceb24-9d3b-411d-bab2-39cfe6a8a056-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "931ceb24-9d3b-411d-bab2-39cfe6a8a056" (UID: "931ceb24-9d3b-411d-bab2-39cfe6a8a056"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 24 00:16:02.608295 kubelet[2756]: I0424 00:16:02.608041 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/931ceb24-9d3b-411d-bab2-39cfe6a8a056-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "931ceb24-9d3b-411d-bab2-39cfe6a8a056" (UID: "931ceb24-9d3b-411d-bab2-39cfe6a8a056"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 24 00:16:02.608295 kubelet[2756]: I0424 00:16:02.608048 2756 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/931ceb24-9d3b-411d-bab2-39cfe6a8a056-kube-api-access-qklj4" (OuterVolumeSpecName: "kube-api-access-qklj4") pod "931ceb24-9d3b-411d-bab2-39cfe6a8a056" (UID: "931ceb24-9d3b-411d-bab2-39cfe6a8a056"). InnerVolumeSpecName "kube-api-access-qklj4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 24 00:16:02.610378 systemd[1]: var-lib-kubelet-pods-931ceb24\x2d9d3b\x2d411d\x2dbab2\x2d39cfe6a8a056-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqklj4.mount: Deactivated successfully. Apr 24 00:16:02.610665 systemd[1]: var-lib-kubelet-pods-931ceb24\x2d9d3b\x2d411d\x2dbab2\x2d39cfe6a8a056-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 24 00:16:02.698822 kubelet[2756]: I0424 00:16:02.698771 2756 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qklj4\" (UniqueName: \"kubernetes.io/projected/931ceb24-9d3b-411d-bab2-39cfe6a8a056-kube-api-access-qklj4\") on node \"172-234-215-230\" DevicePath \"\"" Apr 24 00:16:02.698822 kubelet[2756]: I0424 00:16:02.698812 2756 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/931ceb24-9d3b-411d-bab2-39cfe6a8a056-whisker-backend-key-pair\") on node \"172-234-215-230\" DevicePath \"\"" Apr 24 00:16:02.698822 kubelet[2756]: I0424 00:16:02.698825 2756 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/931ceb24-9d3b-411d-bab2-39cfe6a8a056-nginx-config\") on node \"172-234-215-230\" DevicePath \"\"" Apr 24 00:16:02.698822 kubelet[2756]: I0424 00:16:02.698838 2756 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/931ceb24-9d3b-411d-bab2-39cfe6a8a056-whisker-ca-bundle\") on node \"172-234-215-230\" DevicePath \"\"" Apr 24 00:16:03.076677 systemd[1]: Created slice kubepods-besteffort-pod952027fd_84bb_4249_83a5_04c7975a90e5.slice - libcontainer container kubepods-besteffort-pod952027fd_84bb_4249_83a5_04c7975a90e5.slice. Apr 24 00:16:03.080047 systemd[1]: Removed slice kubepods-besteffort-pod931ceb24_9d3b_411d_bab2_39cfe6a8a056.slice - libcontainer container kubepods-besteffort-pod931ceb24_9d3b_411d_bab2_39cfe6a8a056.slice. 
Apr 24 00:16:03.082782 containerd[1582]: time="2026-04-24T00:16:03.082739536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6mmr,Uid:952027fd-84bb-4249-83a5-04c7975a90e5,Namespace:calico-system,Attempt:0,}" Apr 24 00:16:03.201292 kubelet[2756]: E0424 00:16:03.198778 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:16:03.235159 kubelet[2756]: I0424 00:16:03.234126 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-68pf6" podStartSLOduration=4.135676954 podStartE2EDuration="14.234098524s" podCreationTimestamp="2026-04-24 00:15:49 +0000 UTC" firstStartedPulling="2026-04-24 00:15:50.300607725 +0000 UTC m=+19.348239853" lastFinishedPulling="2026-04-24 00:16:00.399029295 +0000 UTC m=+29.446661423" observedRunningTime="2026-04-24 00:16:03.231753209 +0000 UTC m=+32.279385347" watchObservedRunningTime="2026-04-24 00:16:03.234098524 +0000 UTC m=+32.281730652" Apr 24 00:16:03.245203 systemd-networkd[1428]: calif03287d48a5: Link UP Apr 24 00:16:03.246935 systemd-networkd[1428]: calif03287d48a5: Gained carrier Apr 24 00:16:03.278359 containerd[1582]: 2026-04-24 00:16:03.115 [ERROR][3808] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 00:16:03.278359 containerd[1582]: 2026-04-24 00:16:03.135 [INFO][3808] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--215--230-k8s-csi--node--driver--k6mmr-eth0 csi-node-driver- calico-system 952027fd-84bb-4249-83a5-04c7975a90e5 714 0 2026-04-24 00:15:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:74865c565 k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-215-230 csi-node-driver-k6mmr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif03287d48a5 [] [] }} ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Namespace="calico-system" Pod="csi-node-driver-k6mmr" WorkloadEndpoint="172--234--215--230-k8s-csi--node--driver--k6mmr-" Apr 24 00:16:03.278359 containerd[1582]: 2026-04-24 00:16:03.135 [INFO][3808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Namespace="calico-system" Pod="csi-node-driver-k6mmr" WorkloadEndpoint="172--234--215--230-k8s-csi--node--driver--k6mmr-eth0" Apr 24 00:16:03.278359 containerd[1582]: 2026-04-24 00:16:03.164 [INFO][3821] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" HandleID="k8s-pod-network.4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Workload="172--234--215--230-k8s-csi--node--driver--k6mmr-eth0" Apr 24 00:16:03.279402 containerd[1582]: 2026-04-24 00:16:03.171 [INFO][3821] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" HandleID="k8s-pod-network.4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Workload="172--234--215--230-k8s-csi--node--driver--k6mmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f3ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-215-230", "pod":"csi-node-driver-k6mmr", "timestamp":"2026-04-24 00:16:03.164302116 +0000 UTC"}, Hostname:"172-234-215-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00059a160)} Apr 24 00:16:03.279402 containerd[1582]: 2026-04-24 00:16:03.171 [INFO][3821] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:16:03.279402 containerd[1582]: 2026-04-24 00:16:03.171 [INFO][3821] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 00:16:03.279402 containerd[1582]: 2026-04-24 00:16:03.171 [INFO][3821] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-215-230' Apr 24 00:16:03.279402 containerd[1582]: 2026-04-24 00:16:03.175 [INFO][3821] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" host="172-234-215-230" Apr 24 00:16:03.279402 containerd[1582]: 2026-04-24 00:16:03.179 [INFO][3821] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-215-230" Apr 24 00:16:03.279402 containerd[1582]: 2026-04-24 00:16:03.184 [INFO][3821] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:03.279402 containerd[1582]: 2026-04-24 00:16:03.189 [INFO][3821] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:03.279402 containerd[1582]: 2026-04-24 00:16:03.192 [INFO][3821] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:03.282273 containerd[1582]: 2026-04-24 00:16:03.192 [INFO][3821] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" host="172-234-215-230" Apr 24 00:16:03.282273 containerd[1582]: 2026-04-24 00:16:03.194 [INFO][3821] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68 Apr 24 00:16:03.282273 containerd[1582]: 2026-04-24 
00:16:03.202 [INFO][3821] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" host="172-234-215-230" Apr 24 00:16:03.282273 containerd[1582]: 2026-04-24 00:16:03.213 [INFO][3821] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.193/26] block=192.168.20.192/26 handle="k8s-pod-network.4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" host="172-234-215-230" Apr 24 00:16:03.282273 containerd[1582]: 2026-04-24 00:16:03.213 [INFO][3821] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.193/26] handle="k8s-pod-network.4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" host="172-234-215-230" Apr 24 00:16:03.282273 containerd[1582]: 2026-04-24 00:16:03.213 [INFO][3821] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:16:03.282273 containerd[1582]: 2026-04-24 00:16:03.213 [INFO][3821] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.193/26] IPv6=[] ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" HandleID="k8s-pod-network.4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Workload="172--234--215--230-k8s-csi--node--driver--k6mmr-eth0" Apr 24 00:16:03.282486 containerd[1582]: 2026-04-24 00:16:03.223 [INFO][3808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Namespace="calico-system" Pod="csi-node-driver-k6mmr" WorkloadEndpoint="172--234--215--230-k8s-csi--node--driver--k6mmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-csi--node--driver--k6mmr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"952027fd-84bb-4249-83a5-04c7975a90e5", ResourceVersion:"714", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"74865c565", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"", Pod:"csi-node-driver-k6mmr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif03287d48a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:16:03.282580 containerd[1582]: 2026-04-24 00:16:03.223 [INFO][3808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.193/32] ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Namespace="calico-system" Pod="csi-node-driver-k6mmr" WorkloadEndpoint="172--234--215--230-k8s-csi--node--driver--k6mmr-eth0" Apr 24 00:16:03.282580 containerd[1582]: 2026-04-24 00:16:03.223 [INFO][3808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif03287d48a5 ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Namespace="calico-system" Pod="csi-node-driver-k6mmr" WorkloadEndpoint="172--234--215--230-k8s-csi--node--driver--k6mmr-eth0" Apr 24 00:16:03.282580 containerd[1582]: 2026-04-24 00:16:03.246 [INFO][3808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Namespace="calico-system" Pod="csi-node-driver-k6mmr" WorkloadEndpoint="172--234--215--230-k8s-csi--node--driver--k6mmr-eth0" Apr 24 00:16:03.284046 containerd[1582]: 2026-04-24 00:16:03.249 [INFO][3808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Namespace="calico-system" Pod="csi-node-driver-k6mmr" WorkloadEndpoint="172--234--215--230-k8s-csi--node--driver--k6mmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-csi--node--driver--k6mmr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"952027fd-84bb-4249-83a5-04c7975a90e5", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"74865c565", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68", Pod:"csi-node-driver-k6mmr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, 
InterfaceName:"calif03287d48a5", MAC:"86:8c:c9:d7:25:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:16:03.284226 containerd[1582]: 2026-04-24 00:16:03.259 [INFO][3808] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" Namespace="calico-system" Pod="csi-node-driver-k6mmr" WorkloadEndpoint="172--234--215--230-k8s-csi--node--driver--k6mmr-eth0" Apr 24 00:16:03.356792 systemd[1]: Created slice kubepods-besteffort-pod9fd10aca_970d_441f_a3a5_ae0df4b0e09e.slice - libcontainer container kubepods-besteffort-pod9fd10aca_970d_441f_a3a5_ae0df4b0e09e.slice. Apr 24 00:16:03.363890 containerd[1582]: time="2026-04-24T00:16:03.363849250Z" level=info msg="connecting to shim 4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68" address="unix:///run/containerd/s/a2199e9a63b24ff583753d25f6988cbbbcea8ba9aab384b4cc278b6a16caff82" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:03.395787 systemd[1]: Started cri-containerd-4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68.scope - libcontainer container 4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68. 
Apr 24 00:16:03.407196 kubelet[2756]: I0424 00:16:03.407091 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmc4w\" (UniqueName: \"kubernetes.io/projected/9fd10aca-970d-441f-a3a5-ae0df4b0e09e-kube-api-access-pmc4w\") pod \"whisker-5449f96cf4-8kk9b\" (UID: \"9fd10aca-970d-441f-a3a5-ae0df4b0e09e\") " pod="calico-system/whisker-5449f96cf4-8kk9b" Apr 24 00:16:03.407328 kubelet[2756]: I0424 00:16:03.407264 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9fd10aca-970d-441f-a3a5-ae0df4b0e09e-whisker-ca-bundle\") pod \"whisker-5449f96cf4-8kk9b\" (UID: \"9fd10aca-970d-441f-a3a5-ae0df4b0e09e\") " pod="calico-system/whisker-5449f96cf4-8kk9b" Apr 24 00:16:03.407328 kubelet[2756]: I0424 00:16:03.407301 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9fd10aca-970d-441f-a3a5-ae0df4b0e09e-whisker-backend-key-pair\") pod \"whisker-5449f96cf4-8kk9b\" (UID: \"9fd10aca-970d-441f-a3a5-ae0df4b0e09e\") " pod="calico-system/whisker-5449f96cf4-8kk9b" Apr 24 00:16:03.407422 kubelet[2756]: I0424 00:16:03.407333 2756 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9fd10aca-970d-441f-a3a5-ae0df4b0e09e-nginx-config\") pod \"whisker-5449f96cf4-8kk9b\" (UID: \"9fd10aca-970d-441f-a3a5-ae0df4b0e09e\") " pod="calico-system/whisker-5449f96cf4-8kk9b" Apr 24 00:16:03.429230 containerd[1582]: time="2026-04-24T00:16:03.429188188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k6mmr,Uid:952027fd-84bb-4249-83a5-04c7975a90e5,Namespace:calico-system,Attempt:0,} returns sandbox id \"4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68\"" Apr 24 00:16:03.431340 
containerd[1582]: time="2026-04-24T00:16:03.431317972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\"" Apr 24 00:16:03.663847 containerd[1582]: time="2026-04-24T00:16:03.663705931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5449f96cf4-8kk9b,Uid:9fd10aca-970d-441f-a3a5-ae0df4b0e09e,Namespace:calico-system,Attempt:0,}" Apr 24 00:16:03.800830 systemd-networkd[1428]: calicca85d22a30: Link UP Apr 24 00:16:03.802961 systemd-networkd[1428]: calicca85d22a30: Gained carrier Apr 24 00:16:03.826604 containerd[1582]: 2026-04-24 00:16:03.693 [ERROR][3885] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 24 00:16:03.826604 containerd[1582]: 2026-04-24 00:16:03.704 [INFO][3885] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0 whisker-5449f96cf4- calico-system 9fd10aca-970d-441f-a3a5-ae0df4b0e09e 909 0 2026-04-24 00:16:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5449f96cf4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s 172-234-215-230 whisker-5449f96cf4-8kk9b eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicca85d22a30 [] [] }} ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Namespace="calico-system" Pod="whisker-5449f96cf4-8kk9b" WorkloadEndpoint="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-" Apr 24 00:16:03.826604 containerd[1582]: 2026-04-24 00:16:03.704 [INFO][3885] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Namespace="calico-system" Pod="whisker-5449f96cf4-8kk9b" 
WorkloadEndpoint="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0" Apr 24 00:16:03.826604 containerd[1582]: 2026-04-24 00:16:03.754 [INFO][3908] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" HandleID="k8s-pod-network.95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Workload="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0" Apr 24 00:16:03.827863 containerd[1582]: 2026-04-24 00:16:03.763 [INFO][3908] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" HandleID="k8s-pod-network.95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Workload="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-215-230", "pod":"whisker-5449f96cf4-8kk9b", "timestamp":"2026-04-24 00:16:03.75498014 +0000 UTC"}, Hostname:"172-234-215-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001126e0)} Apr 24 00:16:03.827863 containerd[1582]: 2026-04-24 00:16:03.763 [INFO][3908] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:16:03.827863 containerd[1582]: 2026-04-24 00:16:03.763 [INFO][3908] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 24 00:16:03.827863 containerd[1582]: 2026-04-24 00:16:03.764 [INFO][3908] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-215-230' Apr 24 00:16:03.827863 containerd[1582]: 2026-04-24 00:16:03.767 [INFO][3908] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" host="172-234-215-230" Apr 24 00:16:03.827863 containerd[1582]: 2026-04-24 00:16:03.773 [INFO][3908] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-215-230" Apr 24 00:16:03.827863 containerd[1582]: 2026-04-24 00:16:03.778 [INFO][3908] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:03.827863 containerd[1582]: 2026-04-24 00:16:03.781 [INFO][3908] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:03.827863 containerd[1582]: 2026-04-24 00:16:03.783 [INFO][3908] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:03.828135 containerd[1582]: 2026-04-24 00:16:03.783 [INFO][3908] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" host="172-234-215-230" Apr 24 00:16:03.828135 containerd[1582]: 2026-04-24 00:16:03.784 [INFO][3908] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad Apr 24 00:16:03.828135 containerd[1582]: 2026-04-24 00:16:03.788 [INFO][3908] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" host="172-234-215-230" Apr 24 00:16:03.828135 containerd[1582]: 2026-04-24 00:16:03.792 [INFO][3908] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.194/26] block=192.168.20.192/26 
handle="k8s-pod-network.95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" host="172-234-215-230" Apr 24 00:16:03.828135 containerd[1582]: 2026-04-24 00:16:03.792 [INFO][3908] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.194/26] handle="k8s-pod-network.95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" host="172-234-215-230" Apr 24 00:16:03.828135 containerd[1582]: 2026-04-24 00:16:03.793 [INFO][3908] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:16:03.828135 containerd[1582]: 2026-04-24 00:16:03.793 [INFO][3908] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.194/26] IPv6=[] ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" HandleID="k8s-pod-network.95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Workload="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0" Apr 24 00:16:03.828290 containerd[1582]: 2026-04-24 00:16:03.796 [INFO][3885] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Namespace="calico-system" Pod="whisker-5449f96cf4-8kk9b" WorkloadEndpoint="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0", GenerateName:"whisker-5449f96cf4-", Namespace:"calico-system", SelfLink:"", UID:"9fd10aca-970d-441f-a3a5-ae0df4b0e09e", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 16, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5449f96cf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"", Pod:"whisker-5449f96cf4-8kk9b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicca85d22a30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:16:03.828290 containerd[1582]: 2026-04-24 00:16:03.796 [INFO][3885] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.194/32] ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Namespace="calico-system" Pod="whisker-5449f96cf4-8kk9b" WorkloadEndpoint="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0" Apr 24 00:16:03.828383 containerd[1582]: 2026-04-24 00:16:03.796 [INFO][3885] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicca85d22a30 ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Namespace="calico-system" Pod="whisker-5449f96cf4-8kk9b" WorkloadEndpoint="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0" Apr 24 00:16:03.828383 containerd[1582]: 2026-04-24 00:16:03.802 [INFO][3885] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Namespace="calico-system" Pod="whisker-5449f96cf4-8kk9b" WorkloadEndpoint="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0" Apr 24 00:16:03.828434 containerd[1582]: 2026-04-24 00:16:03.803 [INFO][3885] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Namespace="calico-system" 
Pod="whisker-5449f96cf4-8kk9b" WorkloadEndpoint="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0", GenerateName:"whisker-5449f96cf4-", Namespace:"calico-system", SelfLink:"", UID:"9fd10aca-970d-441f-a3a5-ae0df4b0e09e", ResourceVersion:"909", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 16, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5449f96cf4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad", Pod:"whisker-5449f96cf4-8kk9b", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.20.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicca85d22a30", MAC:"de:6a:9c:22:c7:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:16:03.828493 containerd[1582]: 2026-04-24 00:16:03.813 [INFO][3885] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" Namespace="calico-system" Pod="whisker-5449f96cf4-8kk9b" WorkloadEndpoint="172--234--215--230-k8s-whisker--5449f96cf4--8kk9b-eth0" Apr 24 00:16:03.859656 containerd[1582]: 
time="2026-04-24T00:16:03.858330684Z" level=info msg="connecting to shim 95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad" address="unix:///run/containerd/s/c35ab672257fe36fbf349550ee0a32ae1d2e039c3f1396a031d073b57d3d909c" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:03.911783 systemd[1]: Started cri-containerd-95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad.scope - libcontainer container 95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad. Apr 24 00:16:04.034425 containerd[1582]: time="2026-04-24T00:16:04.033785308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5449f96cf4-8kk9b,Uid:9fd10aca-970d-441f-a3a5-ae0df4b0e09e,Namespace:calico-system,Attempt:0,} returns sandbox id \"95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad\"" Apr 24 00:16:04.201578 kubelet[2756]: I0424 00:16:04.201485 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 00:16:04.408873 systemd-networkd[1428]: calif03287d48a5: Gained IPv6LL Apr 24 00:16:04.851504 systemd-networkd[1428]: vxlan.calico: Link UP Apr 24 00:16:04.851517 systemd-networkd[1428]: vxlan.calico: Gained carrier Apr 24 00:16:04.984880 systemd-networkd[1428]: calicca85d22a30: Gained IPv6LL Apr 24 00:16:05.072587 kubelet[2756]: I0424 00:16:05.072540 2756 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="931ceb24-9d3b-411d-bab2-39cfe6a8a056" path="/var/lib/kubelet/pods/931ceb24-9d3b-411d-bab2-39cfe6a8a056/volumes" Apr 24 00:16:05.410660 containerd[1582]: time="2026-04-24T00:16:05.409872731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:05.427866 containerd[1582]: time="2026-04-24T00:16:05.411576945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=8535421" Apr 24 00:16:05.427866 containerd[1582]: 
time="2026-04-24T00:16:05.416818924Z" level=info msg="ImageCreate event name:\"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:05.427866 containerd[1582]: time="2026-04-24T00:16:05.419547350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:05.427866 containerd[1582]: time="2026-04-24T00:16:05.420797472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"11496846\" in 1.98944649s" Apr 24 00:16:05.427866 containerd[1582]: time="2026-04-24T00:16:05.420822512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\"" Apr 24 00:16:05.427866 containerd[1582]: time="2026-04-24T00:16:05.422415345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 24 00:16:05.429236 containerd[1582]: time="2026-04-24T00:16:05.429194888Z" level=info msg="CreateContainer within sandbox \"4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 24 00:16:05.441295 containerd[1582]: time="2026-04-24T00:16:05.436739291Z" level=info msg="Container cc72b9d8c9a0889c1f47a975129d3af891fd7b8d2c439a35d43afd3dde02cdd0: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:05.449504 containerd[1582]: time="2026-04-24T00:16:05.449465035Z" level=info msg="CreateContainer within sandbox \"4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cc72b9d8c9a0889c1f47a975129d3af891fd7b8d2c439a35d43afd3dde02cdd0\"" Apr 24 00:16:05.450302 containerd[1582]: time="2026-04-24T00:16:05.450239367Z" level=info msg="StartContainer for \"cc72b9d8c9a0889c1f47a975129d3af891fd7b8d2c439a35d43afd3dde02cdd0\"" Apr 24 00:16:05.452037 containerd[1582]: time="2026-04-24T00:16:05.451979339Z" level=info msg="connecting to shim cc72b9d8c9a0889c1f47a975129d3af891fd7b8d2c439a35d43afd3dde02cdd0" address="unix:///run/containerd/s/a2199e9a63b24ff583753d25f6988cbbbcea8ba9aab384b4cc278b6a16caff82" protocol=ttrpc version=3 Apr 24 00:16:05.492813 systemd[1]: Started cri-containerd-cc72b9d8c9a0889c1f47a975129d3af891fd7b8d2c439a35d43afd3dde02cdd0.scope - libcontainer container cc72b9d8c9a0889c1f47a975129d3af891fd7b8d2c439a35d43afd3dde02cdd0. Apr 24 00:16:05.565069 containerd[1582]: time="2026-04-24T00:16:05.564853219Z" level=info msg="StartContainer for \"cc72b9d8c9a0889c1f47a975129d3af891fd7b8d2c439a35d43afd3dde02cdd0\" returns successfully" Apr 24 00:16:05.737339 kubelet[2756]: I0424 00:16:05.736572 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 24 00:16:06.255368 containerd[1582]: time="2026-04-24T00:16:06.255311415Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:06.256153 containerd[1582]: time="2026-04-24T00:16:06.256089825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=6050387" Apr 24 00:16:06.257105 containerd[1582]: time="2026-04-24T00:16:06.256828937Z" level=info msg="ImageCreate event name:\"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:06.258615 containerd[1582]: time="2026-04-24T00:16:06.258579390Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:06.259489 containerd[1582]: time="2026-04-24T00:16:06.259453342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"9011804\" in 837.014797ms" Apr 24 00:16:06.259540 containerd[1582]: time="2026-04-24T00:16:06.259492632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\"" Apr 24 00:16:06.261040 containerd[1582]: time="2026-04-24T00:16:06.261008474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 24 00:16:06.265518 containerd[1582]: time="2026-04-24T00:16:06.265473483Z" level=info msg="CreateContainer within sandbox \"95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 24 00:16:06.274649 containerd[1582]: time="2026-04-24T00:16:06.272847576Z" level=info msg="Container 952aa07a9e11597f515aac0f543613febf07dca4f5f79c379fe7f0160c2b7dda: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:06.285407 containerd[1582]: time="2026-04-24T00:16:06.285343479Z" level=info msg="CreateContainer within sandbox \"95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"952aa07a9e11597f515aac0f543613febf07dca4f5f79c379fe7f0160c2b7dda\"" Apr 24 00:16:06.286162 containerd[1582]: time="2026-04-24T00:16:06.286119119Z" level=info msg="StartContainer for 
\"952aa07a9e11597f515aac0f543613febf07dca4f5f79c379fe7f0160c2b7dda\"" Apr 24 00:16:06.287756 containerd[1582]: time="2026-04-24T00:16:06.287700473Z" level=info msg="connecting to shim 952aa07a9e11597f515aac0f543613febf07dca4f5f79c379fe7f0160c2b7dda" address="unix:///run/containerd/s/c35ab672257fe36fbf349550ee0a32ae1d2e039c3f1396a031d073b57d3d909c" protocol=ttrpc version=3 Apr 24 00:16:06.310794 systemd[1]: Started cri-containerd-952aa07a9e11597f515aac0f543613febf07dca4f5f79c379fe7f0160c2b7dda.scope - libcontainer container 952aa07a9e11597f515aac0f543613febf07dca4f5f79c379fe7f0160c2b7dda. Apr 24 00:16:06.376509 containerd[1582]: time="2026-04-24T00:16:06.376202912Z" level=info msg="StartContainer for \"952aa07a9e11597f515aac0f543613febf07dca4f5f79c379fe7f0160c2b7dda\" returns successfully" Apr 24 00:16:06.392989 systemd-networkd[1428]: vxlan.calico: Gained IPv6LL Apr 24 00:16:07.185937 containerd[1582]: time="2026-04-24T00:16:07.185848979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:07.187460 containerd[1582]: time="2026-04-24T00:16:07.187386072Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=13498053" Apr 24 00:16:07.188133 containerd[1582]: time="2026-04-24T00:16:07.188083803Z" level=info msg="ImageCreate event name:\"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:07.190552 containerd[1582]: time="2026-04-24T00:16:07.190508988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:07.192120 containerd[1582]: time="2026-04-24T00:16:07.191651770Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"16459430\" in 930.586506ms" Apr 24 00:16:07.192120 containerd[1582]: time="2026-04-24T00:16:07.191690710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\"" Apr 24 00:16:07.194274 containerd[1582]: time="2026-04-24T00:16:07.193387562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 24 00:16:07.196855 containerd[1582]: time="2026-04-24T00:16:07.196816679Z" level=info msg="CreateContainer within sandbox \"4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 24 00:16:07.206663 containerd[1582]: time="2026-04-24T00:16:07.205831624Z" level=info msg="Container 6072961195f67312b1c2218a0a6e1c1985a6ee937c37ca44c287215a42bde2e7: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:07.237669 containerd[1582]: time="2026-04-24T00:16:07.237594480Z" level=info msg="CreateContainer within sandbox \"4141814a81457e9df8fa6d1ccb511837ac28d4fbae8e6bf803eb1bc0b939ce68\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6072961195f67312b1c2218a0a6e1c1985a6ee937c37ca44c287215a42bde2e7\"" Apr 24 00:16:07.239099 containerd[1582]: time="2026-04-24T00:16:07.239052332Z" level=info msg="StartContainer for \"6072961195f67312b1c2218a0a6e1c1985a6ee937c37ca44c287215a42bde2e7\"" Apr 24 00:16:07.243003 containerd[1582]: time="2026-04-24T00:16:07.242937999Z" level=info msg="connecting to shim 
6072961195f67312b1c2218a0a6e1c1985a6ee937c37ca44c287215a42bde2e7" address="unix:///run/containerd/s/a2199e9a63b24ff583753d25f6988cbbbcea8ba9aab384b4cc278b6a16caff82" protocol=ttrpc version=3 Apr 24 00:16:07.273189 systemd[1]: Started cri-containerd-6072961195f67312b1c2218a0a6e1c1985a6ee937c37ca44c287215a42bde2e7.scope - libcontainer container 6072961195f67312b1c2218a0a6e1c1985a6ee937c37ca44c287215a42bde2e7. Apr 24 00:16:07.354067 containerd[1582]: time="2026-04-24T00:16:07.353989953Z" level=info msg="StartContainer for \"6072961195f67312b1c2218a0a6e1c1985a6ee937c37ca44c287215a42bde2e7\" returns successfully" Apr 24 00:16:08.129235 kubelet[2756]: I0424 00:16:08.129204 2756 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 24 00:16:08.130589 kubelet[2756]: I0424 00:16:08.130097 2756 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 24 00:16:08.256998 kubelet[2756]: I0424 00:16:08.256931 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k6mmr" podStartSLOduration=15.49390997 podStartE2EDuration="19.256914342s" podCreationTimestamp="2026-04-24 00:15:49 +0000 UTC" firstStartedPulling="2026-04-24 00:16:03.43021162 +0000 UTC m=+32.477843748" lastFinishedPulling="2026-04-24 00:16:07.193215992 +0000 UTC m=+36.240848120" observedRunningTime="2026-04-24 00:16:08.25586267 +0000 UTC m=+37.303494818" watchObservedRunningTime="2026-04-24 00:16:08.256914342 +0000 UTC m=+37.304546470" Apr 24 00:16:08.446867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3020219757.mount: Deactivated successfully. 
Apr 24 00:16:08.462016 containerd[1582]: time="2026-04-24T00:16:08.461214310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:08.462016 containerd[1582]: time="2026-04-24T00:16:08.461985501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=17000660" Apr 24 00:16:08.462909 containerd[1582]: time="2026-04-24T00:16:08.462860473Z" level=info msg="ImageCreate event name:\"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:08.464714 containerd[1582]: time="2026-04-24T00:16:08.464448495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:08.465600 containerd[1582]: time="2026-04-24T00:16:08.465575167Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"17000490\" in 1.271293703s" Apr 24 00:16:08.465708 containerd[1582]: time="2026-04-24T00:16:08.465691657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\"" Apr 24 00:16:08.470757 containerd[1582]: time="2026-04-24T00:16:08.470718005Z" level=info msg="CreateContainer within sandbox \"95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 24 00:16:08.476549 
containerd[1582]: time="2026-04-24T00:16:08.476529665Z" level=info msg="Container 531574dc17f670ec12bb41cd60bb76fd4dfac93665be2a268c270045ec700da5: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:08.488848 containerd[1582]: time="2026-04-24T00:16:08.488815277Z" level=info msg="CreateContainer within sandbox \"95c297931fa22c251b669f99325dfbf94b2a74765e61429747b4ab3a3af301ad\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"531574dc17f670ec12bb41cd60bb76fd4dfac93665be2a268c270045ec700da5\"" Apr 24 00:16:08.489786 containerd[1582]: time="2026-04-24T00:16:08.489672648Z" level=info msg="StartContainer for \"531574dc17f670ec12bb41cd60bb76fd4dfac93665be2a268c270045ec700da5\"" Apr 24 00:16:08.491737 containerd[1582]: time="2026-04-24T00:16:08.491716241Z" level=info msg="connecting to shim 531574dc17f670ec12bb41cd60bb76fd4dfac93665be2a268c270045ec700da5" address="unix:///run/containerd/s/c35ab672257fe36fbf349550ee0a32ae1d2e039c3f1396a031d073b57d3d909c" protocol=ttrpc version=3 Apr 24 00:16:08.520911 systemd[1]: Started cri-containerd-531574dc17f670ec12bb41cd60bb76fd4dfac93665be2a268c270045ec700da5.scope - libcontainer container 531574dc17f670ec12bb41cd60bb76fd4dfac93665be2a268c270045ec700da5. 
Apr 24 00:16:08.586815 containerd[1582]: time="2026-04-24T00:16:08.586176903Z" level=info msg="StartContainer for \"531574dc17f670ec12bb41cd60bb76fd4dfac93665be2a268c270045ec700da5\" returns successfully" Apr 24 00:16:09.255109 kubelet[2756]: I0424 00:16:09.255027 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5449f96cf4-8kk9b" podStartSLOduration=1.8255140170000002 podStartE2EDuration="6.255007451s" podCreationTimestamp="2026-04-24 00:16:03 +0000 UTC" firstStartedPulling="2026-04-24 00:16:04.037026684 +0000 UTC m=+33.084658812" lastFinishedPulling="2026-04-24 00:16:08.466520108 +0000 UTC m=+37.514152246" observedRunningTime="2026-04-24 00:16:09.254237529 +0000 UTC m=+38.301869667" watchObservedRunningTime="2026-04-24 00:16:09.255007451 +0000 UTC m=+38.302639579" Apr 24 00:16:13.068139 kubelet[2756]: E0424 00:16:13.067761 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:16:13.070017 containerd[1582]: time="2026-04-24T00:16:13.069136550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smrjm,Uid:72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4,Namespace:kube-system,Attempt:0,}" Apr 24 00:16:13.210270 systemd-networkd[1428]: caliab5339b13ec: Link UP Apr 24 00:16:13.210535 systemd-networkd[1428]: caliab5339b13ec: Gained carrier Apr 24 00:16:13.233839 containerd[1582]: 2026-04-24 00:16:13.129 [INFO][4365] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0 coredns-674b8bbfcf- kube-system 72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4 839 0 2026-04-24 00:15:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
172-234-215-230 coredns-674b8bbfcf-smrjm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab5339b13ec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-smrjm" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-" Apr 24 00:16:13.233839 containerd[1582]: 2026-04-24 00:16:13.129 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-smrjm" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0" Apr 24 00:16:13.233839 containerd[1582]: 2026-04-24 00:16:13.158 [INFO][4378] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" HandleID="k8s-pod-network.92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Workload="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0" Apr 24 00:16:13.234087 containerd[1582]: 2026-04-24 00:16:13.166 [INFO][4378] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" HandleID="k8s-pod-network.92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Workload="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000285a50), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-215-230", "pod":"coredns-674b8bbfcf-smrjm", "timestamp":"2026-04-24 00:16:13.158793844 +0000 UTC"}, Hostname:"172-234-215-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000534f20)} Apr 24 
00:16:13.234087 containerd[1582]: 2026-04-24 00:16:13.166 [INFO][4378] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:16:13.234087 containerd[1582]: 2026-04-24 00:16:13.166 [INFO][4378] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 00:16:13.234087 containerd[1582]: 2026-04-24 00:16:13.166 [INFO][4378] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-215-230' Apr 24 00:16:13.234087 containerd[1582]: 2026-04-24 00:16:13.169 [INFO][4378] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" host="172-234-215-230" Apr 24 00:16:13.234087 containerd[1582]: 2026-04-24 00:16:13.176 [INFO][4378] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-215-230" Apr 24 00:16:13.234087 containerd[1582]: 2026-04-24 00:16:13.183 [INFO][4378] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:13.234087 containerd[1582]: 2026-04-24 00:16:13.185 [INFO][4378] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:13.234087 containerd[1582]: 2026-04-24 00:16:13.188 [INFO][4378] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:13.234742 containerd[1582]: 2026-04-24 00:16:13.188 [INFO][4378] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" host="172-234-215-230" Apr 24 00:16:13.234742 containerd[1582]: 2026-04-24 00:16:13.190 [INFO][4378] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e Apr 24 00:16:13.234742 containerd[1582]: 2026-04-24 00:16:13.195 [INFO][4378] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 
handle="k8s-pod-network.92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" host="172-234-215-230" Apr 24 00:16:13.234742 containerd[1582]: 2026-04-24 00:16:13.200 [INFO][4378] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.195/26] block=192.168.20.192/26 handle="k8s-pod-network.92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" host="172-234-215-230" Apr 24 00:16:13.234742 containerd[1582]: 2026-04-24 00:16:13.201 [INFO][4378] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.195/26] handle="k8s-pod-network.92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" host="172-234-215-230" Apr 24 00:16:13.234742 containerd[1582]: 2026-04-24 00:16:13.201 [INFO][4378] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:16:13.234742 containerd[1582]: 2026-04-24 00:16:13.201 [INFO][4378] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.195/26] IPv6=[] ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" HandleID="k8s-pod-network.92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Workload="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0" Apr 24 00:16:13.234889 containerd[1582]: 2026-04-24 00:16:13.205 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-smrjm" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 38, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"", Pod:"coredns-674b8bbfcf-smrjm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab5339b13ec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:16:13.234889 containerd[1582]: 2026-04-24 00:16:13.205 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.195/32] ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-smrjm" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0" Apr 24 00:16:13.234889 containerd[1582]: 2026-04-24 00:16:13.205 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab5339b13ec ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-smrjm" 
WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0" Apr 24 00:16:13.234889 containerd[1582]: 2026-04-24 00:16:13.209 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-smrjm" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0" Apr 24 00:16:13.234889 containerd[1582]: 2026-04-24 00:16:13.209 [INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-smrjm" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e", Pod:"coredns-674b8bbfcf-smrjm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab5339b13ec", MAC:"ea:e7:06:6b:dd:8f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:16:13.234889 containerd[1582]: 2026-04-24 00:16:13.223 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" Namespace="kube-system" Pod="coredns-674b8bbfcf-smrjm" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--smrjm-eth0" Apr 24 00:16:13.266393 containerd[1582]: time="2026-04-24T00:16:13.266355837Z" level=info msg="connecting to shim 92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e" address="unix:///run/containerd/s/a8569c64a19478db9e2b695658c429f71bb18a91f63177a7986a738e9ce659e1" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:13.300841 systemd[1]: Started cri-containerd-92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e.scope - libcontainer container 92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e. 
Apr 24 00:16:13.362108 containerd[1582]: time="2026-04-24T00:16:13.362017002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-smrjm,Uid:72be5cc9-be34-45a3-b8c1-d04e0ffcd2d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e\"" Apr 24 00:16:13.362742 kubelet[2756]: E0424 00:16:13.362709 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:16:13.370132 containerd[1582]: time="2026-04-24T00:16:13.370006644Z" level=info msg="CreateContainer within sandbox \"92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 00:16:13.381523 containerd[1582]: time="2026-04-24T00:16:13.381060080Z" level=info msg="Container a24a73e287817ed7606d566732c264c5725db0b7aa42e05f37bc3f7b9e75c885: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:13.388779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3118318683.mount: Deactivated successfully. 
Apr 24 00:16:13.392038 containerd[1582]: time="2026-04-24T00:16:13.391997957Z" level=info msg="CreateContainer within sandbox \"92c680db70a7544e1cbf403098425782da2645702d734fc4979c546c00794d2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a24a73e287817ed7606d566732c264c5725db0b7aa42e05f37bc3f7b9e75c885\"" Apr 24 00:16:13.393439 containerd[1582]: time="2026-04-24T00:16:13.393419489Z" level=info msg="StartContainer for \"a24a73e287817ed7606d566732c264c5725db0b7aa42e05f37bc3f7b9e75c885\"" Apr 24 00:16:13.396263 containerd[1582]: time="2026-04-24T00:16:13.396227303Z" level=info msg="connecting to shim a24a73e287817ed7606d566732c264c5725db0b7aa42e05f37bc3f7b9e75c885" address="unix:///run/containerd/s/a8569c64a19478db9e2b695658c429f71bb18a91f63177a7986a738e9ce659e1" protocol=ttrpc version=3 Apr 24 00:16:13.417797 systemd[1]: Started cri-containerd-a24a73e287817ed7606d566732c264c5725db0b7aa42e05f37bc3f7b9e75c885.scope - libcontainer container a24a73e287817ed7606d566732c264c5725db0b7aa42e05f37bc3f7b9e75c885. 
Apr 24 00:16:13.458802 containerd[1582]: time="2026-04-24T00:16:13.458754937Z" level=info msg="StartContainer for \"a24a73e287817ed7606d566732c264c5725db0b7aa42e05f37bc3f7b9e75c885\" returns successfully"
Apr 24 00:16:14.068603 kubelet[2756]: E0424 00:16:14.068227 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:16:14.069872 containerd[1582]: time="2026-04-24T00:16:14.069813727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777b497fdb-pfklg,Uid:0f5c8edb-698c-49fe-9bda-1ab98ebf3b73,Namespace:calico-system,Attempt:0,}"
Apr 24 00:16:14.070069 containerd[1582]: time="2026-04-24T00:16:14.069957337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-67j2v,Uid:d2ade395-3c55-4406-a8dd-0bc70a4e5f7d,Namespace:kube-system,Attempt:0,}"
Apr 24 00:16:14.071067 containerd[1582]: time="2026-04-24T00:16:14.071044759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8949555b5-f5zck,Uid:97df688a-49d4-4441-94d2-8a46a7bf5835,Namespace:calico-system,Attempt:0,}"
Apr 24 00:16:14.262550 kubelet[2756]: E0424 00:16:14.262233 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:16:14.278476 systemd-networkd[1428]: calic28542dc1c2: Link UP
Apr 24 00:16:14.281610 systemd-networkd[1428]: calic28542dc1c2: Gained carrier
Apr 24 00:16:14.289300 kubelet[2756]: I0424 00:16:14.289071 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-smrjm" podStartSLOduration=36.288910601 podStartE2EDuration="36.288910601s" podCreationTimestamp="2026-04-24 00:15:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:16:14.286919858 +0000 UTC m=+43.334551986" watchObservedRunningTime="2026-04-24 00:16:14.288910601 +0000 UTC m=+43.336542729"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.172 [INFO][4477] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0 coredns-674b8bbfcf- kube-system d2ade395-3c55-4406-a8dd-0bc70a4e5f7d 841 0 2026-04-24 00:15:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-215-230 coredns-674b8bbfcf-67j2v eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic28542dc1c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Namespace="kube-system" Pod="coredns-674b8bbfcf-67j2v" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.173 [INFO][4477] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Namespace="kube-system" Pod="coredns-674b8bbfcf-67j2v" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.218 [INFO][4517] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" HandleID="k8s-pod-network.6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Workload="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.225 [INFO][4517] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" HandleID="k8s-pod-network.6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Workload="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f3770), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-215-230", "pod":"coredns-674b8bbfcf-67j2v", "timestamp":"2026-04-24 00:16:14.218162957 +0000 UTC"}, Hostname:"172-234-215-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000281760)}
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.225 [INFO][4517] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.225 [INFO][4517] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.225 [INFO][4517] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-215-230'
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.228 [INFO][4517] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" host="172-234-215-230"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.237 [INFO][4517] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-215-230"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.245 [INFO][4517] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="172-234-215-230"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.247 [INFO][4517] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="172-234-215-230"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.249 [INFO][4517] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172-234-215-230"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.250 [INFO][4517] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" host="172-234-215-230"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.251 [INFO][4517] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.257 [INFO][4517] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" host="172-234-215-230"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.264 [INFO][4517] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.196/26] block=192.168.20.192/26 handle="k8s-pod-network.6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" host="172-234-215-230"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.264 [INFO][4517] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.196/26] handle="k8s-pod-network.6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" host="172-234-215-230"
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.266 [INFO][4517] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 00:16:14.304933 containerd[1582]: 2026-04-24 00:16:14.266 [INFO][4517] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.196/26] IPv6=[] ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" HandleID="k8s-pod-network.6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Workload="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0"
Apr 24 00:16:14.305680 containerd[1582]: 2026-04-24 00:16:14.271 [INFO][4477] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Namespace="kube-system" Pod="coredns-674b8bbfcf-67j2v" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d2ade395-3c55-4406-a8dd-0bc70a4e5f7d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"", Pod:"coredns-674b8bbfcf-67j2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic28542dc1c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 00:16:14.305680 containerd[1582]: 2026-04-24 00:16:14.272 [INFO][4477] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.196/32] ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Namespace="kube-system" Pod="coredns-674b8bbfcf-67j2v" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0"
Apr 24 00:16:14.305680 containerd[1582]: 2026-04-24 00:16:14.272 [INFO][4477] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic28542dc1c2 ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Namespace="kube-system" Pod="coredns-674b8bbfcf-67j2v" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0"
Apr 24 00:16:14.305680 containerd[1582]: 2026-04-24 00:16:14.282 [INFO][4477] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Namespace="kube-system" Pod="coredns-674b8bbfcf-67j2v" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0"
Apr 24 00:16:14.305680 containerd[1582]: 2026-04-24 00:16:14.284 [INFO][4477] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Namespace="kube-system" Pod="coredns-674b8bbfcf-67j2v" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d2ade395-3c55-4406-a8dd-0bc70a4e5f7d", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41", Pod:"coredns-674b8bbfcf-67j2v", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.20.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic28542dc1c2", MAC:"f2:ac:c9:a5:94:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 00:16:14.305680 containerd[1582]: 2026-04-24 00:16:14.300 [INFO][4477] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" Namespace="kube-system" Pod="coredns-674b8bbfcf-67j2v" WorkloadEndpoint="172--234--215--230-k8s-coredns--674b8bbfcf--67j2v-eth0"
Apr 24 00:16:14.356976 containerd[1582]: time="2026-04-24T00:16:14.356786561Z" level=info msg="connecting to shim 6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41" address="unix:///run/containerd/s/df53ba32ed1a16f3f8f245434a0c918d547a26d1a23bf17da9d67709b53676f0" namespace=k8s.io protocol=ttrpc version=3
Apr 24 00:16:14.392853 systemd-networkd[1428]: caliab5339b13ec: Gained IPv6LL
Apr 24 00:16:14.430225 systemd-networkd[1428]: cali4ab57fb35b6: Link UP
Apr 24 00:16:14.432160 systemd-networkd[1428]: cali4ab57fb35b6: Gained carrier
Apr 24 00:16:14.433781 systemd[1]: Started cri-containerd-6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41.scope - libcontainer container 6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41.
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.153 [INFO][4474] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0 calico-apiserver-777b497fdb- calico-system 0f5c8edb-698c-49fe-9bda-1ab98ebf3b73 840 0 2026-04-24 00:15:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:777b497fdb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172-234-215-230 calico-apiserver-777b497fdb-pfklg eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4ab57fb35b6 [] [] }} ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-pfklg" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.153 [INFO][4474] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-pfklg" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.226 [INFO][4510] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" HandleID="k8s-pod-network.f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Workload="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.235 [INFO][4510] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" HandleID="k8s-pod-network.f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Workload="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f3ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-215-230", "pod":"calico-apiserver-777b497fdb-pfklg", "timestamp":"2026-04-24 00:16:14.226161559 +0000 UTC"}, Hostname:"172-234-215-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001886e0)}
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.235 [INFO][4510] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.265 [INFO][4510] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.265 [INFO][4510] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-215-230'
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.338 [INFO][4510] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" host="172-234-215-230"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.349 [INFO][4510] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-215-230"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.363 [INFO][4510] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="172-234-215-230"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.367 [INFO][4510] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="172-234-215-230"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.374 [INFO][4510] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172-234-215-230"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.375 [INFO][4510] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" host="172-234-215-230"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.380 [INFO][4510] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.389 [INFO][4510] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" host="172-234-215-230"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.415 [INFO][4510] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.197/26] block=192.168.20.192/26 handle="k8s-pod-network.f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" host="172-234-215-230"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.415 [INFO][4510] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.197/26] handle="k8s-pod-network.f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" host="172-234-215-230"
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.416 [INFO][4510] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 00:16:14.469544 containerd[1582]: 2026-04-24 00:16:14.416 [INFO][4510] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.197/26] IPv6=[] ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" HandleID="k8s-pod-network.f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Workload="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0"
Apr 24 00:16:14.470998 containerd[1582]: 2026-04-24 00:16:14.421 [INFO][4474] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-pfklg" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0", GenerateName:"calico-apiserver-777b497fdb-", Namespace:"calico-system", SelfLink:"", UID:"0f5c8edb-698c-49fe-9bda-1ab98ebf3b73", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777b497fdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"", Pod:"calico-apiserver-777b497fdb-pfklg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4ab57fb35b6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 00:16:14.470998 containerd[1582]: 2026-04-24 00:16:14.421 [INFO][4474] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.197/32] ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-pfklg" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0"
Apr 24 00:16:14.470998 containerd[1582]: 2026-04-24 00:16:14.421 [INFO][4474] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ab57fb35b6 ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-pfklg" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0"
Apr 24 00:16:14.470998 containerd[1582]: 2026-04-24 00:16:14.433 [INFO][4474] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-pfklg" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0"
Apr 24 00:16:14.470998 containerd[1582]: 2026-04-24 00:16:14.435 [INFO][4474] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-pfklg" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0", GenerateName:"calico-apiserver-777b497fdb-", Namespace:"calico-system", SelfLink:"", UID:"0f5c8edb-698c-49fe-9bda-1ab98ebf3b73", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777b497fdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660", Pod:"calico-apiserver-777b497fdb-pfklg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4ab57fb35b6", MAC:"86:32:8b:88:07:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 00:16:14.470998 containerd[1582]: 2026-04-24 00:16:14.463 [INFO][4474] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-pfklg" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--pfklg-eth0"
Apr 24 00:16:14.511896 systemd-networkd[1428]: cali9544882176a: Link UP
Apr 24 00:16:14.513729 systemd-networkd[1428]: cali9544882176a: Gained carrier
Apr 24 00:16:14.529914 containerd[1582]: time="2026-04-24T00:16:14.529866257Z" level=info msg="connecting to shim f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660" address="unix:///run/containerd/s/9989b68e3285b200a7c2c640d8b895a807d265ad752b8d5ac495e006704eaa5c" namespace=k8s.io protocol=ttrpc version=3
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.189 [INFO][4480] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0 calico-kube-controllers-8949555b5- calico-system 97df688a-49d4-4441-94d2-8a46a7bf5835 845 0 2026-04-24 00:15:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8949555b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-215-230 calico-kube-controllers-8949555b5-f5zck eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9544882176a [] [] }} ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Namespace="calico-system" Pod="calico-kube-controllers-8949555b5-f5zck" WorkloadEndpoint="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.189 [INFO][4480] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Namespace="calico-system" Pod="calico-kube-controllers-8949555b5-f5zck" WorkloadEndpoint="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.251 [INFO][4523] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" HandleID="k8s-pod-network.ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Workload="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.264 [INFO][4523] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" HandleID="k8s-pod-network.ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Workload="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ea0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-215-230", "pod":"calico-kube-controllers-8949555b5-f5zck", "timestamp":"2026-04-24 00:16:14.251278536 +0000 UTC"}, Hostname:"172-234-215-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00038f080)}
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.265 [INFO][4523] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.416 [INFO][4523] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.416 [INFO][4523] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-215-230'
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.440 [INFO][4523] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" host="172-234-215-230"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.455 [INFO][4523] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-215-230"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.467 [INFO][4523] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="172-234-215-230"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.471 [INFO][4523] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="172-234-215-230"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.477 [INFO][4523] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172-234-215-230"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.477 [INFO][4523] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" host="172-234-215-230"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.481 [INFO][4523] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.486 [INFO][4523] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" host="172-234-215-230"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.496 [INFO][4523] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.198/26] block=192.168.20.192/26 handle="k8s-pod-network.ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" host="172-234-215-230"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.496 [INFO][4523] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.198/26] handle="k8s-pod-network.ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" host="172-234-215-230"
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.496 [INFO][4523] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 24 00:16:14.557613 containerd[1582]: 2026-04-24 00:16:14.497 [INFO][4523] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.198/26] IPv6=[] ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" HandleID="k8s-pod-network.ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Workload="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0"
Apr 24 00:16:14.558283 containerd[1582]: 2026-04-24 00:16:14.504 [INFO][4480] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Namespace="calico-system" Pod="calico-kube-controllers-8949555b5-f5zck" WorkloadEndpoint="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0", GenerateName:"calico-kube-controllers-8949555b5-", Namespace:"calico-system", SelfLink:"", UID:"97df688a-49d4-4441-94d2-8a46a7bf5835", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8949555b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"", Pod:"calico-kube-controllers-8949555b5-f5zck", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9544882176a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 00:16:14.558283 containerd[1582]: 2026-04-24 00:16:14.505 [INFO][4480] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.198/32] ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Namespace="calico-system" Pod="calico-kube-controllers-8949555b5-f5zck" WorkloadEndpoint="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0"
Apr 24 00:16:14.558283 containerd[1582]: 2026-04-24 00:16:14.505 [INFO][4480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9544882176a ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Namespace="calico-system" Pod="calico-kube-controllers-8949555b5-f5zck" WorkloadEndpoint="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0"
Apr 24 00:16:14.558283 containerd[1582]: 2026-04-24 00:16:14.515 [INFO][4480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Namespace="calico-system" Pod="calico-kube-controllers-8949555b5-f5zck" WorkloadEndpoint="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0"
Apr 24 00:16:14.558283 containerd[1582]: 2026-04-24 00:16:14.517 [INFO][4480] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Namespace="calico-system" Pod="calico-kube-controllers-8949555b5-f5zck" WorkloadEndpoint="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0", GenerateName:"calico-kube-controllers-8949555b5-", Namespace:"calico-system", SelfLink:"", UID:"97df688a-49d4-4441-94d2-8a46a7bf5835", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8949555b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd", Pod:"calico-kube-controllers-8949555b5-f5zck", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.20.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9544882176a", MAC:"3e:f8:b6:f6:c6:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 24 00:16:14.558283 containerd[1582]: 2026-04-24 00:16:14.545 [INFO][4480] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" Namespace="calico-system" Pod="calico-kube-controllers-8949555b5-f5zck" WorkloadEndpoint="172--234--215--230-k8s-calico--kube--controllers--8949555b5--f5zck-eth0"
Apr 24 00:16:14.592400 containerd[1582]: time="2026-04-24T00:16:14.592341409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-67j2v,Uid:d2ade395-3c55-4406-a8dd-0bc70a4e5f7d,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41\""
Apr 24 00:16:14.594478 kubelet[2756]: E0424 00:16:14.594434 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21"
Apr 24 00:16:14.595783 systemd[1]: Started cri-containerd-f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660.scope - libcontainer container f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660.
Apr 24 00:16:14.601043 containerd[1582]: time="2026-04-24T00:16:14.600335781Z" level=info msg="CreateContainer within sandbox \"6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 24 00:16:14.612133 containerd[1582]: time="2026-04-24T00:16:14.611762718Z" level=info msg="Container 650e47dc871cadb332066142ecc0070134db0e48f753686b459a99b073a8d36f: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:14.615487 containerd[1582]: time="2026-04-24T00:16:14.614786782Z" level=info msg="connecting to shim ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd" address="unix:///run/containerd/s/d92cebbc37223cf8ab5e1225b26929633933fb16b7ee293684044f2e782ef5e1" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:14.625845 containerd[1582]: time="2026-04-24T00:16:14.625218988Z" level=info msg="CreateContainer within sandbox \"6b09c823f6e7a6b1906ba521d6becb5c30055a79e06c80f81d00492bc6b71f41\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"650e47dc871cadb332066142ecc0070134db0e48f753686b459a99b073a8d36f\"" Apr 24 00:16:14.629117 containerd[1582]: time="2026-04-24T00:16:14.629086163Z" level=info msg="StartContainer for \"650e47dc871cadb332066142ecc0070134db0e48f753686b459a99b073a8d36f\"" Apr 24 00:16:14.636084 containerd[1582]: time="2026-04-24T00:16:14.635848583Z" level=info msg="connecting to shim 650e47dc871cadb332066142ecc0070134db0e48f753686b459a99b073a8d36f" address="unix:///run/containerd/s/df53ba32ed1a16f3f8f245434a0c918d547a26d1a23bf17da9d67709b53676f0" protocol=ttrpc version=3 Apr 24 00:16:14.667125 systemd[1]: Started cri-containerd-ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd.scope - libcontainer container ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd. 
Apr 24 00:16:14.683774 systemd[1]: Started cri-containerd-650e47dc871cadb332066142ecc0070134db0e48f753686b459a99b073a8d36f.scope - libcontainer container 650e47dc871cadb332066142ecc0070134db0e48f753686b459a99b073a8d36f. Apr 24 00:16:14.740055 containerd[1582]: time="2026-04-24T00:16:14.739994487Z" level=info msg="StartContainer for \"650e47dc871cadb332066142ecc0070134db0e48f753686b459a99b073a8d36f\" returns successfully" Apr 24 00:16:14.774202 containerd[1582]: time="2026-04-24T00:16:14.771766244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777b497fdb-pfklg,Uid:0f5c8edb-698c-49fe-9bda-1ab98ebf3b73,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660\"" Apr 24 00:16:14.775669 containerd[1582]: time="2026-04-24T00:16:14.775246420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 24 00:16:14.830131 containerd[1582]: time="2026-04-24T00:16:14.830081180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8949555b5-f5zck,Uid:97df688a-49d4-4441-94d2-8a46a7bf5835,Namespace:calico-system,Attempt:0,} returns sandbox id \"ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd\"" Apr 24 00:16:15.068461 containerd[1582]: time="2026-04-24T00:16:15.068406050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-qjw6h,Uid:0feedfd4-cd8a-4596-9bcc-87eb0aa67f44,Namespace:calico-system,Attempt:0,}" Apr 24 00:16:15.224002 systemd-networkd[1428]: calic84a0a317fd: Link UP Apr 24 00:16:15.225606 systemd-networkd[1428]: calic84a0a317fd: Gained carrier Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.128 [INFO][4734] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0 goldmane-57885fdd4c- calico-system 0feedfd4-cd8a-4596-9bcc-87eb0aa67f44 842 0 2026-04-24 00:15:49 +0000 UTC 
map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:57885fdd4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s 172-234-215-230 goldmane-57885fdd4c-qjw6h eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic84a0a317fd [] [] }} ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Namespace="calico-system" Pod="goldmane-57885fdd4c-qjw6h" WorkloadEndpoint="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.129 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Namespace="calico-system" Pod="goldmane-57885fdd4c-qjw6h" WorkloadEndpoint="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.169 [INFO][4746] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" HandleID="k8s-pod-network.8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Workload="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.182 [INFO][4746] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" HandleID="k8s-pod-network.8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Workload="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000303420), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-215-230", "pod":"goldmane-57885fdd4c-qjw6h", "timestamp":"2026-04-24 00:16:15.169799037 +0000 UTC"}, Hostname:"172-234-215-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003731e0)} Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.182 [INFO][4746] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.182 [INFO][4746] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.182 [INFO][4746] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-215-230' Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.185 [INFO][4746] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" host="172-234-215-230" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.190 [INFO][4746] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-215-230" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.194 [INFO][4746] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.196 [INFO][4746] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.199 [INFO][4746] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.199 [INFO][4746] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" host="172-234-215-230" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.201 [INFO][4746] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831 Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.204 [INFO][4746] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" host="172-234-215-230" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.212 [INFO][4746] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.199/26] block=192.168.20.192/26 handle="k8s-pod-network.8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" host="172-234-215-230" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.212 [INFO][4746] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.199/26] handle="k8s-pod-network.8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" host="172-234-215-230" Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.212 [INFO][4746] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 24 00:16:15.250283 containerd[1582]: 2026-04-24 00:16:15.213 [INFO][4746] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.199/26] IPv6=[] ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" HandleID="k8s-pod-network.8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Workload="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0" Apr 24 00:16:15.251413 containerd[1582]: 2026-04-24 00:16:15.217 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Namespace="calico-system" Pod="goldmane-57885fdd4c-qjw6h" WorkloadEndpoint="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0", GenerateName:"goldmane-57885fdd4c-", Namespace:"calico-system", SelfLink:"", UID:"0feedfd4-cd8a-4596-9bcc-87eb0aa67f44", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"57885fdd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"", Pod:"goldmane-57885fdd4c-qjw6h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calic84a0a317fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:16:15.251413 containerd[1582]: 2026-04-24 00:16:15.218 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.199/32] ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Namespace="calico-system" Pod="goldmane-57885fdd4c-qjw6h" WorkloadEndpoint="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0" Apr 24 00:16:15.251413 containerd[1582]: 2026-04-24 00:16:15.218 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic84a0a317fd ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Namespace="calico-system" Pod="goldmane-57885fdd4c-qjw6h" WorkloadEndpoint="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0" Apr 24 00:16:15.251413 containerd[1582]: 2026-04-24 00:16:15.226 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Namespace="calico-system" Pod="goldmane-57885fdd4c-qjw6h" WorkloadEndpoint="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0" Apr 24 00:16:15.251413 containerd[1582]: 2026-04-24 00:16:15.230 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Namespace="calico-system" Pod="goldmane-57885fdd4c-qjw6h" WorkloadEndpoint="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0", GenerateName:"goldmane-57885fdd4c-", Namespace:"calico-system", SelfLink:"", UID:"0feedfd4-cd8a-4596-9bcc-87eb0aa67f44", ResourceVersion:"842", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"57885fdd4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831", Pod:"goldmane-57885fdd4c-qjw6h", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.20.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic84a0a317fd", MAC:"9a:a4:dc:d8:41:56", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:16:15.251413 containerd[1582]: 2026-04-24 00:16:15.244 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" Namespace="calico-system" Pod="goldmane-57885fdd4c-qjw6h" WorkloadEndpoint="172--234--215--230-k8s-goldmane--57885fdd4c--qjw6h-eth0" Apr 24 00:16:15.288423 kubelet[2756]: E0424 00:16:15.288373 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:16:15.289827 kubelet[2756]: E0424 00:16:15.289490 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 
00:16:15.301075 containerd[1582]: time="2026-04-24T00:16:15.300419186Z" level=info msg="connecting to shim 8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831" address="unix:///run/containerd/s/e2d27a9c79471f7977642b54be641d16080cee9d8638a318adeefcd57030910c" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:15.343472 kubelet[2756]: I0424 00:16:15.343306 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-67j2v" podStartSLOduration=37.343285387 podStartE2EDuration="37.343285387s" podCreationTimestamp="2026-04-24 00:15:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-24 00:16:15.315726308 +0000 UTC m=+44.363358456" watchObservedRunningTime="2026-04-24 00:16:15.343285387 +0000 UTC m=+44.390917515" Apr 24 00:16:15.347967 systemd[1]: Started cri-containerd-8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831.scope - libcontainer container 8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831. 
Apr 24 00:16:15.450240 containerd[1582]: time="2026-04-24T00:16:15.450188922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-57885fdd4c-qjw6h,Uid:0feedfd4-cd8a-4596-9bcc-87eb0aa67f44,Namespace:calico-system,Attempt:0,} returns sandbox id \"8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831\"" Apr 24 00:16:15.545080 systemd-networkd[1428]: calic28542dc1c2: Gained IPv6LL Apr 24 00:16:16.070487 containerd[1582]: time="2026-04-24T00:16:16.070401277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777b497fdb-2l7vc,Uid:7e12a673-c35b-4194-9f5f-bfd64649b8e2,Namespace:calico-system,Attempt:0,}" Apr 24 00:16:16.186735 systemd-networkd[1428]: cali9544882176a: Gained IPv6LL Apr 24 00:16:16.272433 systemd-networkd[1428]: cali79e9cff8c55: Link UP Apr 24 00:16:16.273705 systemd-networkd[1428]: cali79e9cff8c55: Gained carrier Apr 24 00:16:16.295416 kubelet[2756]: E0424 00:16:16.295361 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:16:16.296196 kubelet[2756]: E0424 00:16:16.296120 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.133 [INFO][4821] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0 calico-apiserver-777b497fdb- calico-system 7e12a673-c35b-4194-9f5f-bfd64649b8e2 843 0 2026-04-24 00:15:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:777b497fdb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] 
map[] [] [] []} {k8s 172-234-215-230 calico-apiserver-777b497fdb-2l7vc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali79e9cff8c55 [] [] }} ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-2l7vc" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.133 [INFO][4821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-2l7vc" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.182 [INFO][4834] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" HandleID="k8s-pod-network.5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Workload="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.194 [INFO][4834] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" HandleID="k8s-pod-network.5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Workload="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103ec0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-215-230", "pod":"calico-apiserver-777b497fdb-2l7vc", "timestamp":"2026-04-24 00:16:16.182957768 +0000 UTC"}, Hostname:"172-234-215-230", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fe580)} Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.194 [INFO][4834] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.194 [INFO][4834] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.194 [INFO][4834] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-215-230' Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.197 [INFO][4834] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" host="172-234-215-230" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.202 [INFO][4834] ipam/ipam.go 409: Looking up existing affinities for host host="172-234-215-230" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.207 [INFO][4834] ipam/ipam.go 526: Trying affinity for 192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.209 [INFO][4834] ipam/ipam.go 160: Attempting to load block cidr=192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.211 [INFO][4834] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.20.192/26 host="172-234-215-230" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.211 [INFO][4834] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.20.192/26 handle="k8s-pod-network.5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" host="172-234-215-230" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.213 [INFO][4834] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548 Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.219 [INFO][4834] ipam/ipam.go 1272: Writing 
block in order to claim IPs block=192.168.20.192/26 handle="k8s-pod-network.5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" host="172-234-215-230" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.237 [INFO][4834] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.20.200/26] block=192.168.20.192/26 handle="k8s-pod-network.5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" host="172-234-215-230" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.237 [INFO][4834] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.20.200/26] handle="k8s-pod-network.5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" host="172-234-215-230" Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.237 [INFO][4834] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 24 00:16:16.317944 containerd[1582]: 2026-04-24 00:16:16.237 [INFO][4834] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.20.200/26] IPv6=[] ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" HandleID="k8s-pod-network.5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Workload="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0" Apr 24 00:16:16.321025 containerd[1582]: 2026-04-24 00:16:16.251 [INFO][4821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-2l7vc" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0", GenerateName:"calico-apiserver-777b497fdb-", Namespace:"calico-system", SelfLink:"", UID:"7e12a673-c35b-4194-9f5f-bfd64649b8e2", ResourceVersion:"843", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777b497fdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"", Pod:"calico-apiserver-777b497fdb-2l7vc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali79e9cff8c55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:16:16.321025 containerd[1582]: 2026-04-24 00:16:16.251 [INFO][4821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.20.200/32] ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-2l7vc" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0" Apr 24 00:16:16.321025 containerd[1582]: 2026-04-24 00:16:16.251 [INFO][4821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79e9cff8c55 ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-2l7vc" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0" Apr 24 00:16:16.321025 containerd[1582]: 2026-04-24 00:16:16.273 [INFO][4821] cni-plugin/dataplane_linux.go 508: 
Disabling IPv4 forwarding ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-2l7vc" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0" Apr 24 00:16:16.321025 containerd[1582]: 2026-04-24 00:16:16.280 [INFO][4821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-2l7vc" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0", GenerateName:"calico-apiserver-777b497fdb-", Namespace:"calico-system", SelfLink:"", UID:"7e12a673-c35b-4194-9f5f-bfd64649b8e2", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.April, 24, 0, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777b497fdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-215-230", ContainerID:"5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548", Pod:"calico-apiserver-777b497fdb-2l7vc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.20.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali79e9cff8c55", MAC:"92:b2:8f:1b:9e:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 24 00:16:16.321025 containerd[1582]: 2026-04-24 00:16:16.306 [INFO][4821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" Namespace="calico-system" Pod="calico-apiserver-777b497fdb-2l7vc" WorkloadEndpoint="172--234--215--230-k8s-calico--apiserver--777b497fdb--2l7vc-eth0" Apr 24 00:16:16.387780 containerd[1582]: time="2026-04-24T00:16:16.387717639Z" level=info msg="connecting to shim 5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548" address="unix:///run/containerd/s/76746b20d890f4c7df3a2f7d546504060ddd9bc651de05fd61b60c6433054b67" namespace=k8s.io protocol=ttrpc version=3 Apr 24 00:16:16.442749 systemd-networkd[1428]: cali4ab57fb35b6: Gained IPv6LL Apr 24 00:16:16.454914 systemd[1]: Started cri-containerd-5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548.scope - libcontainer container 5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548. 
Apr 24 00:16:16.571751 containerd[1582]: time="2026-04-24T00:16:16.571382399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777b497fdb-2l7vc,Uid:7e12a673-c35b-4194-9f5f-bfd64649b8e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548\"" Apr 24 00:16:16.952870 systemd-networkd[1428]: calic84a0a317fd: Gained IPv6LL Apr 24 00:16:16.980081 containerd[1582]: time="2026-04-24T00:16:16.980042479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:16.981050 containerd[1582]: time="2026-04-24T00:16:16.980981250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=46175896" Apr 24 00:16:16.982411 containerd[1582]: time="2026-04-24T00:16:16.981626431Z" level=info msg="ImageCreate event name:\"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:16.983834 containerd[1582]: time="2026-04-24T00:16:16.983797185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:16.984790 containerd[1582]: time="2026-04-24T00:16:16.984770656Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 2.209495726s" Apr 24 00:16:16.985064 containerd[1582]: time="2026-04-24T00:16:16.985047956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image 
reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 24 00:16:16.987900 containerd[1582]: time="2026-04-24T00:16:16.987879030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 24 00:16:16.992872 containerd[1582]: time="2026-04-24T00:16:16.992839287Z" level=info msg="CreateContainer within sandbox \"f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 00:16:17.002657 containerd[1582]: time="2026-04-24T00:16:17.001983360Z" level=info msg="Container b14fe8cc8d7d990963b00140539766c178e2c027caa36757865571b6f7fbb73d: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:17.011549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2444154177.mount: Deactivated successfully. Apr 24 00:16:17.015390 containerd[1582]: time="2026-04-24T00:16:17.015359079Z" level=info msg="CreateContainer within sandbox \"f5b698bf0522719fd4b394eeb4d7e2f41fe2e4385fd81ab3ec48f3b8dca87660\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b14fe8cc8d7d990963b00140539766c178e2c027caa36757865571b6f7fbb73d\"" Apr 24 00:16:17.016122 containerd[1582]: time="2026-04-24T00:16:17.016083980Z" level=info msg="StartContainer for \"b14fe8cc8d7d990963b00140539766c178e2c027caa36757865571b6f7fbb73d\"" Apr 24 00:16:17.018258 containerd[1582]: time="2026-04-24T00:16:17.018095993Z" level=info msg="connecting to shim b14fe8cc8d7d990963b00140539766c178e2c027caa36757865571b6f7fbb73d" address="unix:///run/containerd/s/9989b68e3285b200a7c2c640d8b895a807d265ad752b8d5ac495e006704eaa5c" protocol=ttrpc version=3 Apr 24 00:16:17.049826 systemd[1]: Started cri-containerd-b14fe8cc8d7d990963b00140539766c178e2c027caa36757865571b6f7fbb73d.scope - libcontainer container b14fe8cc8d7d990963b00140539766c178e2c027caa36757865571b6f7fbb73d. 
Apr 24 00:16:17.115196 containerd[1582]: time="2026-04-24T00:16:17.115137788Z" level=info msg="StartContainer for \"b14fe8cc8d7d990963b00140539766c178e2c027caa36757865571b6f7fbb73d\" returns successfully" Apr 24 00:16:17.310236 kubelet[2756]: E0424 00:16:17.309606 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:16:17.337280 kubelet[2756]: I0424 00:16:17.337200 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-777b497fdb-pfklg" podStartSLOduration=26.124924737 podStartE2EDuration="28.337183877s" podCreationTimestamp="2026-04-24 00:15:49 +0000 UTC" firstStartedPulling="2026-04-24 00:16:14.774979959 +0000 UTC m=+43.822612087" lastFinishedPulling="2026-04-24 00:16:16.987239099 +0000 UTC m=+46.034871227" observedRunningTime="2026-04-24 00:16:17.336877717 +0000 UTC m=+46.384509845" watchObservedRunningTime="2026-04-24 00:16:17.337183877 +0000 UTC m=+46.384816005" Apr 24 00:16:17.784987 systemd-networkd[1428]: cali79e9cff8c55: Gained IPv6LL Apr 24 00:16:19.150944 containerd[1582]: time="2026-04-24T00:16:19.150890351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:19.151941 containerd[1582]: time="2026-04-24T00:16:19.151842423Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=50078175" Apr 24 00:16:19.152493 containerd[1582]: time="2026-04-24T00:16:19.152468684Z" level=info msg="ImageCreate event name:\"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:19.154120 containerd[1582]: time="2026-04-24T00:16:19.154071325Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:19.155676 containerd[1582]: time="2026-04-24T00:16:19.155649708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"53039568\" in 2.167646058s" Apr 24 00:16:19.155722 containerd[1582]: time="2026-04-24T00:16:19.155678318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\"" Apr 24 00:16:19.158343 containerd[1582]: time="2026-04-24T00:16:19.158271252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\"" Apr 24 00:16:19.168654 containerd[1582]: time="2026-04-24T00:16:19.165503441Z" level=info msg="CreateContainer within sandbox \"ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 24 00:16:19.182036 containerd[1582]: time="2026-04-24T00:16:19.181648723Z" level=info msg="Container 6e1a19e2fdab55067350775484b24b1b0fa1f1640202ae0c5ffe372af01a7279: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:19.190778 containerd[1582]: time="2026-04-24T00:16:19.190746595Z" level=info msg="CreateContainer within sandbox \"ff11a3c3edc672caf19e4d70edb5c0cb96e273f43e7b9d1d6adc94bf49dda8cd\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6e1a19e2fdab55067350775484b24b1b0fa1f1640202ae0c5ffe372af01a7279\"" Apr 24 00:16:19.192259 containerd[1582]: time="2026-04-24T00:16:19.192225778Z" level=info 
msg="StartContainer for \"6e1a19e2fdab55067350775484b24b1b0fa1f1640202ae0c5ffe372af01a7279\"" Apr 24 00:16:19.194256 containerd[1582]: time="2026-04-24T00:16:19.194234710Z" level=info msg="connecting to shim 6e1a19e2fdab55067350775484b24b1b0fa1f1640202ae0c5ffe372af01a7279" address="unix:///run/containerd/s/d92cebbc37223cf8ab5e1225b26929633933fb16b7ee293684044f2e782ef5e1" protocol=ttrpc version=3 Apr 24 00:16:19.225764 systemd[1]: Started cri-containerd-6e1a19e2fdab55067350775484b24b1b0fa1f1640202ae0c5ffe372af01a7279.scope - libcontainer container 6e1a19e2fdab55067350775484b24b1b0fa1f1640202ae0c5ffe372af01a7279. Apr 24 00:16:19.289092 containerd[1582]: time="2026-04-24T00:16:19.289049507Z" level=info msg="StartContainer for \"6e1a19e2fdab55067350775484b24b1b0fa1f1640202ae0c5ffe372af01a7279\" returns successfully" Apr 24 00:16:19.428183 kubelet[2756]: I0424 00:16:19.427392 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8949555b5-f5zck" podStartSLOduration=25.102543987 podStartE2EDuration="29.427373444s" podCreationTimestamp="2026-04-24 00:15:50 +0000 UTC" firstStartedPulling="2026-04-24 00:16:14.831438692 +0000 UTC m=+43.879070820" lastFinishedPulling="2026-04-24 00:16:19.156268139 +0000 UTC m=+48.203900277" observedRunningTime="2026-04-24 00:16:19.340237166 +0000 UTC m=+48.387869294" watchObservedRunningTime="2026-04-24 00:16:19.427373444 +0000 UTC m=+48.475005572" Apr 24 00:16:20.636312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3244866224.mount: Deactivated successfully. 
Apr 24 00:16:21.213879 containerd[1582]: time="2026-04-24T00:16:21.213828776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:21.216323 containerd[1582]: time="2026-04-24T00:16:21.216027069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=53086083" Apr 24 00:16:21.218108 containerd[1582]: time="2026-04-24T00:16:21.218073922Z" level=info msg="ImageCreate event name:\"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:21.222239 containerd[1582]: time="2026-04-24T00:16:21.222213727Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:21.223184 containerd[1582]: time="2026-04-24T00:16:21.223162168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"53085929\" in 2.064865746s" Apr 24 00:16:21.223450 containerd[1582]: time="2026-04-24T00:16:21.223434718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\"" Apr 24 00:16:21.227283 containerd[1582]: time="2026-04-24T00:16:21.226002962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 24 00:16:21.230452 containerd[1582]: time="2026-04-24T00:16:21.230294167Z" level=info msg="CreateContainer within sandbox 
\"8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 24 00:16:21.237789 containerd[1582]: time="2026-04-24T00:16:21.237761128Z" level=info msg="Container 55db194d10e0ed8fdf3ee4d04b3fbb8e6969431187b5a05d3afd44552593077b: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:21.258072 containerd[1582]: time="2026-04-24T00:16:21.258017774Z" level=info msg="CreateContainer within sandbox \"8a9f8e8a500a3cd973774b98a13f7bd8447432049b1eccde22fc293d990f0831\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"55db194d10e0ed8fdf3ee4d04b3fbb8e6969431187b5a05d3afd44552593077b\"" Apr 24 00:16:21.259232 containerd[1582]: time="2026-04-24T00:16:21.258818585Z" level=info msg="StartContainer for \"55db194d10e0ed8fdf3ee4d04b3fbb8e6969431187b5a05d3afd44552593077b\"" Apr 24 00:16:21.260936 containerd[1582]: time="2026-04-24T00:16:21.260890988Z" level=info msg="connecting to shim 55db194d10e0ed8fdf3ee4d04b3fbb8e6969431187b5a05d3afd44552593077b" address="unix:///run/containerd/s/e2d27a9c79471f7977642b54be641d16080cee9d8638a318adeefcd57030910c" protocol=ttrpc version=3 Apr 24 00:16:21.291926 systemd[1]: Started cri-containerd-55db194d10e0ed8fdf3ee4d04b3fbb8e6969431187b5a05d3afd44552593077b.scope - libcontainer container 55db194d10e0ed8fdf3ee4d04b3fbb8e6969431187b5a05d3afd44552593077b. 
Apr 24 00:16:21.374003 containerd[1582]: time="2026-04-24T00:16:21.373734885Z" level=info msg="StartContainer for \"55db194d10e0ed8fdf3ee4d04b3fbb8e6969431187b5a05d3afd44552593077b\" returns successfully" Apr 24 00:16:21.421813 containerd[1582]: time="2026-04-24T00:16:21.421709837Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 24 00:16:21.422814 containerd[1582]: time="2026-04-24T00:16:21.422787199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 24 00:16:21.425355 containerd[1582]: time="2026-04-24T00:16:21.425327732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 197.883298ms" Apr 24 00:16:21.425449 containerd[1582]: time="2026-04-24T00:16:21.425417352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 24 00:16:21.432917 containerd[1582]: time="2026-04-24T00:16:21.432033961Z" level=info msg="CreateContainer within sandbox \"5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 24 00:16:21.440715 containerd[1582]: time="2026-04-24T00:16:21.439830091Z" level=info msg="Container 09a08211929e5a46d2179eeb8890a608fdcf7a6c5907cbb786e678b5fe578982: CDI devices from CRI Config.CDIDevices: []" Apr 24 00:16:21.451428 containerd[1582]: time="2026-04-24T00:16:21.451403196Z" level=info msg="CreateContainer within sandbox \"5ee9eca5812bad114939fd2ee8e05fbeaf5054e8791ab13770f1f4d584938548\" 
for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"09a08211929e5a46d2179eeb8890a608fdcf7a6c5907cbb786e678b5fe578982\"" Apr 24 00:16:21.452192 containerd[1582]: time="2026-04-24T00:16:21.452176147Z" level=info msg="StartContainer for \"09a08211929e5a46d2179eeb8890a608fdcf7a6c5907cbb786e678b5fe578982\"" Apr 24 00:16:21.454427 containerd[1582]: time="2026-04-24T00:16:21.454371779Z" level=info msg="connecting to shim 09a08211929e5a46d2179eeb8890a608fdcf7a6c5907cbb786e678b5fe578982" address="unix:///run/containerd/s/76746b20d890f4c7df3a2f7d546504060ddd9bc651de05fd61b60c6433054b67" protocol=ttrpc version=3 Apr 24 00:16:21.481848 systemd[1]: Started cri-containerd-09a08211929e5a46d2179eeb8890a608fdcf7a6c5907cbb786e678b5fe578982.scope - libcontainer container 09a08211929e5a46d2179eeb8890a608fdcf7a6c5907cbb786e678b5fe578982. Apr 24 00:16:21.550152 containerd[1582]: time="2026-04-24T00:16:21.550123034Z" level=info msg="StartContainer for \"09a08211929e5a46d2179eeb8890a608fdcf7a6c5907cbb786e678b5fe578982\" returns successfully" Apr 24 00:16:22.387649 kubelet[2756]: I0424 00:16:22.385970 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-57885fdd4c-qjw6h" podStartSLOduration=27.612706181 podStartE2EDuration="33.385951727s" podCreationTimestamp="2026-04-24 00:15:49 +0000 UTC" firstStartedPulling="2026-04-24 00:16:15.451276064 +0000 UTC m=+44.498908192" lastFinishedPulling="2026-04-24 00:16:21.22452161 +0000 UTC m=+50.272153738" observedRunningTime="2026-04-24 00:16:22.364450639 +0000 UTC m=+51.412082767" watchObservedRunningTime="2026-04-24 00:16:22.385951727 +0000 UTC m=+51.433583855" Apr 24 00:16:22.402364 kubelet[2756]: I0424 00:16:22.401171 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-777b497fdb-2l7vc" podStartSLOduration=28.548268055 podStartE2EDuration="33.401153686s" podCreationTimestamp="2026-04-24 00:15:49 +0000 UTC" 
firstStartedPulling="2026-04-24 00:16:16.573836543 +0000 UTC m=+45.621468671" lastFinishedPulling="2026-04-24 00:16:21.426722174 +0000 UTC m=+50.474354302" observedRunningTime="2026-04-24 00:16:22.386585607 +0000 UTC m=+51.434217735" watchObservedRunningTime="2026-04-24 00:16:22.401153686 +0000 UTC m=+51.448785824" Apr 24 00:17:00.068216 kubelet[2756]: E0424 00:17:00.068085 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:17:06.068889 kubelet[2756]: E0424 00:17:06.068847 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:17:08.067951 kubelet[2756]: E0424 00:17:08.067907 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:17:12.067625 kubelet[2756]: E0424 00:17:12.067580 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:17:22.739589 systemd[1]: Started sshd@7-172.234.215.230:22-80.82.70.133:60000.service - OpenSSH per-connection server daemon (80.82.70.133:60000). Apr 24 00:17:22.880487 sshd[5341]: Connection closed by 80.82.70.133 port 60000 Apr 24 00:17:22.882604 systemd[1]: sshd@7-172.234.215.230:22-80.82.70.133:60000.service: Deactivated successfully. 
Apr 24 00:17:23.069984 kubelet[2756]: E0424 00:17:23.068327 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:17:31.069421 kubelet[2756]: E0424 00:17:31.068972 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:17:36.068653 kubelet[2756]: E0424 00:17:36.068588 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:17:42.281781 systemd[1]: Started sshd@8-172.234.215.230:22-94.102.49.155:47206.service - OpenSSH per-connection server daemon (94.102.49.155:47206). Apr 24 00:17:42.402584 sshd[5452]: Connection closed by 94.102.49.155 port 47206 Apr 24 00:17:42.404476 systemd[1]: sshd@8-172.234.215.230:22-94.102.49.155:47206.service: Deactivated successfully. Apr 24 00:17:42.512414 systemd[1]: Started sshd@9-172.234.215.230:22-94.102.49.155:47212.service - OpenSSH per-connection server daemon (94.102.49.155:47212). Apr 24 00:17:42.746688 sshd[5457]: Connection closed by 94.102.49.155 port 47212 [preauth] Apr 24 00:17:42.749538 systemd[1]: sshd@9-172.234.215.230:22-94.102.49.155:47212.service: Deactivated successfully. Apr 24 00:17:50.379933 systemd[1]: Started sshd@10-172.234.215.230:22-20.229.252.112:51346.service - OpenSSH per-connection server daemon (20.229.252.112:51346). Apr 24 00:17:50.904226 sshd[5488]: Accepted publickey for core from 20.229.252.112 port 51346 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:17:50.906172 sshd-session[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:17:50.911301 systemd-logind[1555]: New session 8 of user core. 
Apr 24 00:17:50.916761 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 24 00:17:51.277488 sshd[5491]: Connection closed by 20.229.252.112 port 51346 Apr 24 00:17:51.278240 sshd-session[5488]: pam_unix(sshd:session): session closed for user core Apr 24 00:17:51.283412 systemd[1]: sshd@10-172.234.215.230:22-20.229.252.112:51346.service: Deactivated successfully. Apr 24 00:17:51.287495 systemd[1]: session-8.scope: Deactivated successfully. Apr 24 00:17:51.288757 systemd-logind[1555]: Session 8 logged out. Waiting for processes to exit. Apr 24 00:17:51.291373 systemd-logind[1555]: Removed session 8. Apr 24 00:17:56.388834 systemd[1]: Started sshd@11-172.234.215.230:22-20.229.252.112:49762.service - OpenSSH per-connection server daemon (20.229.252.112:49762). Apr 24 00:17:56.908609 sshd[5528]: Accepted publickey for core from 20.229.252.112 port 49762 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:17:56.910430 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:17:56.915802 systemd-logind[1555]: New session 9 of user core. Apr 24 00:17:56.919755 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 24 00:17:57.275614 sshd[5531]: Connection closed by 20.229.252.112 port 49762 Apr 24 00:17:57.276835 sshd-session[5528]: pam_unix(sshd:session): session closed for user core Apr 24 00:17:57.281886 systemd[1]: sshd@11-172.234.215.230:22-20.229.252.112:49762.service: Deactivated successfully. Apr 24 00:17:57.284603 systemd[1]: session-9.scope: Deactivated successfully. Apr 24 00:17:57.287390 systemd-logind[1555]: Session 9 logged out. Waiting for processes to exit. Apr 24 00:17:57.289493 systemd-logind[1555]: Removed session 9. Apr 24 00:18:02.392879 systemd[1]: Started sshd@12-172.234.215.230:22-20.229.252.112:49776.service - OpenSSH per-connection server daemon (20.229.252.112:49776). 
Apr 24 00:18:02.939681 sshd[5549]: Accepted publickey for core from 20.229.252.112 port 49776 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:02.941281 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:02.947660 systemd-logind[1555]: New session 10 of user core. Apr 24 00:18:02.950793 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 24 00:18:03.315688 sshd[5552]: Connection closed by 20.229.252.112 port 49776 Apr 24 00:18:03.317425 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:03.324654 systemd[1]: sshd@12-172.234.215.230:22-20.229.252.112:49776.service: Deactivated successfully. Apr 24 00:18:03.328123 systemd[1]: session-10.scope: Deactivated successfully. Apr 24 00:18:03.330136 systemd-logind[1555]: Session 10 logged out. Waiting for processes to exit. Apr 24 00:18:03.332746 systemd-logind[1555]: Removed session 10. Apr 24 00:18:03.426868 systemd[1]: Started sshd@13-172.234.215.230:22-20.229.252.112:49778.service - OpenSSH per-connection server daemon (20.229.252.112:49778). Apr 24 00:18:03.955976 sshd[5565]: Accepted publickey for core from 20.229.252.112 port 49778 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:03.958181 sshd-session[5565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:03.963899 systemd-logind[1555]: New session 11 of user core. Apr 24 00:18:03.972790 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 24 00:18:04.360193 sshd[5568]: Connection closed by 20.229.252.112 port 49778 Apr 24 00:18:04.361815 sshd-session[5565]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:04.367061 systemd[1]: sshd@13-172.234.215.230:22-20.229.252.112:49778.service: Deactivated successfully. Apr 24 00:18:04.370234 systemd[1]: session-11.scope: Deactivated successfully. 
Apr 24 00:18:04.372099 systemd-logind[1555]: Session 11 logged out. Waiting for processes to exit. Apr 24 00:18:04.379184 systemd-logind[1555]: Removed session 11. Apr 24 00:18:04.464556 systemd[1]: Started sshd@14-172.234.215.230:22-20.229.252.112:49790.service - OpenSSH per-connection server daemon (20.229.252.112:49790). Apr 24 00:18:04.981896 sshd[5578]: Accepted publickey for core from 20.229.252.112 port 49790 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:04.984042 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:04.989165 systemd-logind[1555]: New session 12 of user core. Apr 24 00:18:04.992766 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 24 00:18:05.067651 kubelet[2756]: E0424 00:18:05.067569 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:18:05.340577 sshd[5581]: Connection closed by 20.229.252.112 port 49790 Apr 24 00:18:05.341450 sshd-session[5578]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:05.346529 systemd-logind[1555]: Session 12 logged out. Waiting for processes to exit. Apr 24 00:18:05.347454 systemd[1]: sshd@14-172.234.215.230:22-20.229.252.112:49790.service: Deactivated successfully. Apr 24 00:18:05.349417 systemd[1]: session-12.scope: Deactivated successfully. Apr 24 00:18:05.352382 systemd-logind[1555]: Removed session 12. Apr 24 00:18:10.454136 systemd[1]: Started sshd@15-172.234.215.230:22-20.229.252.112:49738.service - OpenSSH per-connection server daemon (20.229.252.112:49738). 
Apr 24 00:18:11.000675 sshd[5619]: Accepted publickey for core from 20.229.252.112 port 49738 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:11.002208 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:11.007009 systemd-logind[1555]: New session 13 of user core. Apr 24 00:18:11.012937 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 24 00:18:11.374701 sshd[5622]: Connection closed by 20.229.252.112 port 49738 Apr 24 00:18:11.375553 sshd-session[5619]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:11.380911 systemd[1]: sshd@15-172.234.215.230:22-20.229.252.112:49738.service: Deactivated successfully. Apr 24 00:18:11.383670 systemd[1]: session-13.scope: Deactivated successfully. Apr 24 00:18:11.384977 systemd-logind[1555]: Session 13 logged out. Waiting for processes to exit. Apr 24 00:18:11.387076 systemd-logind[1555]: Removed session 13. Apr 24 00:18:11.486842 systemd[1]: Started sshd@16-172.234.215.230:22-20.229.252.112:49740.service - OpenSSH per-connection server daemon (20.229.252.112:49740). Apr 24 00:18:12.038309 sshd[5634]: Accepted publickey for core from 20.229.252.112 port 49740 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:12.040773 sshd-session[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:12.047710 systemd-logind[1555]: New session 14 of user core. Apr 24 00:18:12.052806 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 24 00:18:12.584067 sshd[5637]: Connection closed by 20.229.252.112 port 49740 Apr 24 00:18:12.586108 sshd-session[5634]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:12.591097 systemd[1]: sshd@16-172.234.215.230:22-20.229.252.112:49740.service: Deactivated successfully. Apr 24 00:18:12.594255 systemd[1]: session-14.scope: Deactivated successfully. 
Apr 24 00:18:12.596198 systemd-logind[1555]: Session 14 logged out. Waiting for processes to exit. Apr 24 00:18:12.597616 systemd-logind[1555]: Removed session 14. Apr 24 00:18:12.690306 systemd[1]: Started sshd@17-172.234.215.230:22-20.229.252.112:49744.service - OpenSSH per-connection server daemon (20.229.252.112:49744). Apr 24 00:18:13.231501 sshd[5647]: Accepted publickey for core from 20.229.252.112 port 49744 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:13.232185 sshd-session[5647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:13.237421 systemd-logind[1555]: New session 15 of user core. Apr 24 00:18:13.245780 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 24 00:18:14.125992 sshd[5650]: Connection closed by 20.229.252.112 port 49744 Apr 24 00:18:14.127815 sshd-session[5647]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:14.132677 systemd-logind[1555]: Session 15 logged out. Waiting for processes to exit. Apr 24 00:18:14.133534 systemd[1]: sshd@17-172.234.215.230:22-20.229.252.112:49744.service: Deactivated successfully. Apr 24 00:18:14.136191 systemd[1]: session-15.scope: Deactivated successfully. Apr 24 00:18:14.137781 systemd-logind[1555]: Removed session 15. Apr 24 00:18:14.234837 systemd[1]: Started sshd@18-172.234.215.230:22-20.229.252.112:49756.service - OpenSSH per-connection server daemon (20.229.252.112:49756). Apr 24 00:18:14.756294 sshd[5666]: Accepted publickey for core from 20.229.252.112 port 49756 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:14.757925 sshd-session[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:14.763010 systemd-logind[1555]: New session 16 of user core. Apr 24 00:18:14.767778 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 24 00:18:15.241965 sshd[5669]: Connection closed by 20.229.252.112 port 49756 Apr 24 00:18:15.243890 sshd-session[5666]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:15.249545 systemd[1]: sshd@18-172.234.215.230:22-20.229.252.112:49756.service: Deactivated successfully. Apr 24 00:18:15.253149 systemd[1]: session-16.scope: Deactivated successfully. Apr 24 00:18:15.254609 systemd-logind[1555]: Session 16 logged out. Waiting for processes to exit. Apr 24 00:18:15.257822 systemd-logind[1555]: Removed session 16. Apr 24 00:18:15.357752 systemd[1]: Started sshd@19-172.234.215.230:22-20.229.252.112:49760.service - OpenSSH per-connection server daemon (20.229.252.112:49760). Apr 24 00:18:15.904355 sshd[5679]: Accepted publickey for core from 20.229.252.112 port 49760 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:15.906171 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:15.912264 systemd-logind[1555]: New session 17 of user core. Apr 24 00:18:15.916776 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 24 00:18:16.272397 sshd[5704]: Connection closed by 20.229.252.112 port 49760 Apr 24 00:18:16.273261 sshd-session[5679]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:16.278944 systemd-logind[1555]: Session 17 logged out. Waiting for processes to exit. Apr 24 00:18:16.279624 systemd[1]: sshd@19-172.234.215.230:22-20.229.252.112:49760.service: Deactivated successfully. Apr 24 00:18:16.282316 systemd[1]: session-17.scope: Deactivated successfully. Apr 24 00:18:16.285166 systemd-logind[1555]: Removed session 17. 
Apr 24 00:18:17.067664 kubelet[2756]: E0424 00:18:17.067498 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:18:21.387022 systemd[1]: Started sshd@20-172.234.215.230:22-20.229.252.112:54354.service - OpenSSH per-connection server daemon (20.229.252.112:54354). Apr 24 00:18:21.933299 sshd[5741]: Accepted publickey for core from 20.229.252.112 port 54354 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:21.935029 sshd-session[5741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:21.941479 systemd-logind[1555]: New session 18 of user core. Apr 24 00:18:21.945749 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 24 00:18:22.299392 sshd[5744]: Connection closed by 20.229.252.112 port 54354 Apr 24 00:18:22.301046 sshd-session[5741]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:22.306493 systemd-logind[1555]: Session 18 logged out. Waiting for processes to exit. Apr 24 00:18:22.306927 systemd[1]: sshd@20-172.234.215.230:22-20.229.252.112:54354.service: Deactivated successfully. Apr 24 00:18:22.310279 systemd[1]: session-18.scope: Deactivated successfully. Apr 24 00:18:22.312318 systemd-logind[1555]: Removed session 18. Apr 24 00:18:25.068185 kubelet[2756]: E0424 00:18:25.068140 2756 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.17 172.232.0.16 172.232.0.21" Apr 24 00:18:27.411857 systemd[1]: Started sshd@21-172.234.215.230:22-20.229.252.112:33468.service - OpenSSH per-connection server daemon (20.229.252.112:33468). 
Apr 24 00:18:27.937122 sshd[5780]: Accepted publickey for core from 20.229.252.112 port 33468 ssh2: RSA SHA256:9R+vUR7rY0NpHfkGjw7iRXt+FTpnhyxQuXdLAz2YbcI Apr 24 00:18:27.939157 sshd-session[5780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 24 00:18:27.944159 systemd-logind[1555]: New session 19 of user core. Apr 24 00:18:27.953740 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 24 00:18:28.287725 sshd[5783]: Connection closed by 20.229.252.112 port 33468 Apr 24 00:18:28.289834 sshd-session[5780]: pam_unix(sshd:session): session closed for user core Apr 24 00:18:28.294155 systemd[1]: sshd@21-172.234.215.230:22-20.229.252.112:33468.service: Deactivated successfully. Apr 24 00:18:28.296948 systemd[1]: session-19.scope: Deactivated successfully. Apr 24 00:18:28.297764 systemd-logind[1555]: Session 19 logged out. Waiting for processes to exit. Apr 24 00:18:28.299581 systemd-logind[1555]: Removed session 19.