Apr 21 10:16:16.965032 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:16:16.965070 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:16:16.965092 kernel: BIOS-provided physical RAM map:
Apr 21 10:16:16.965103 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:16:16.965113 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 21 10:16:16.965123 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 21 10:16:16.965137 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 21 10:16:16.965149 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 21 10:16:16.965161 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 21 10:16:16.965175 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 21 10:16:16.965186 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 21 10:16:16.965198 kernel: NX (Execute Disable) protection: active
Apr 21 10:16:16.965210 kernel: APIC: Static calls initialized
Apr 21 10:16:16.965224 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:16:16.965241 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 21 10:16:16.965260 kernel: SMBIOS 2.7 present.
Apr 21 10:16:16.965274 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 21 10:16:16.965289 kernel: Hypervisor detected: KVM
Apr 21 10:16:16.965303 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:16:16.965318 kernel: kvm-clock: using sched offset of 4213251075 cycles
Apr 21 10:16:16.965331 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:16:16.965345 kernel: tsc: Detected 2500.004 MHz processor
Apr 21 10:16:16.965358 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:16:16.965371 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:16:16.965383 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 21 10:16:16.965399 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:16:16.965412 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:16:16.965425 kernel: Using GB pages for direct mapping
Apr 21 10:16:16.965438 kernel: Secure boot disabled
Apr 21 10:16:16.965451 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:16:16.965463 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 21 10:16:16.965512 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 21 10:16:16.965525 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 21 10:16:16.965538 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 21 10:16:16.965554 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 21 10:16:16.965567 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 21 10:16:16.965580 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 21 10:16:16.965593 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 21 10:16:16.965606 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 21 10:16:16.965620 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 21 10:16:16.965638 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 21 10:16:16.965655 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 21 10:16:16.965669 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 21 10:16:16.965683 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 21 10:16:16.965697 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 21 10:16:16.965711 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 21 10:16:16.965724 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 21 10:16:16.965738 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 21 10:16:16.965754 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 21 10:16:16.965767 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 21 10:16:16.965781 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 21 10:16:16.965795 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 21 10:16:16.965809 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 21 10:16:16.965823 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 21 10:16:16.965837 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 21 10:16:16.965851 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 21 10:16:16.965865 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 21 10:16:16.965882 kernel: NUMA: Initialized distance table, cnt=1
Apr 21 10:16:16.965896 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 21 10:16:16.965909 kernel: Zone ranges:
Apr 21 10:16:16.965923 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:16:16.965937 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 21 10:16:16.965951 kernel: Normal empty
Apr 21 10:16:16.965965 kernel: Movable zone start for each node
Apr 21 10:16:16.965979 kernel: Early memory node ranges
Apr 21 10:16:16.965993 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:16:16.966012 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 21 10:16:16.966026 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 21 10:16:16.966040 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 21 10:16:16.966055 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:16:16.966069 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:16:16.966084 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 21 10:16:16.966098 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 21 10:16:16.966113 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 21 10:16:16.966126 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:16:16.966143 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 21 10:16:16.966158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:16:16.966172 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:16:16.966187 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:16:16.966202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:16:16.966217 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:16:16.966231 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:16:16.966246 kernel: TSC deadline timer available
Apr 21 10:16:16.966260 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:16:16.966275 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:16:16.966292 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 21 10:16:16.966318 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:16:16.966333 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:16:16.966348 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:16:16.966362 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:16:16.966377 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:16:16.966391 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:16:16.966406 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:16:16.966421 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:16:16.966441 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:16:16.966456 kernel: random: crng init done
Apr 21 10:16:16.966492 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:16:16.966509 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 21 10:16:16.966539 kernel: Fallback order for Node 0: 0
Apr 21 10:16:16.966570 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 501318
Apr 21 10:16:16.966590 kernel: Policy zone: DMA32
Apr 21 10:16:16.966602 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:16:16.966620 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162900K reserved, 0K cma-reserved)
Apr 21 10:16:16.966635 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:16:16.966650 kernel: Kernel/User page tables isolation: enabled
Apr 21 10:16:16.966665 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:16:16.966677 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:16:16.966689 kernel: Dynamic Preempt: voluntary
Apr 21 10:16:16.966703 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:16:16.966719 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:16:16.966734 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:16:16.966751 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:16:16.966763 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:16:16.966777 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:16:16.966792 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:16:16.966804 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:16:16.966822 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:16:16.966841 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:16:16.966868 kernel: Console: colour dummy device 80x25
Apr 21 10:16:16.966881 kernel: printk: console [tty0] enabled
Apr 21 10:16:16.966894 kernel: printk: console [ttyS0] enabled
Apr 21 10:16:16.966908 kernel: ACPI: Core revision 20230628
Apr 21 10:16:16.966925 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 21 10:16:16.966941 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:16:16.966954 kernel: x2apic enabled
Apr 21 10:16:16.966969 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:16:16.966984 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Apr 21 10:16:16.967003 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Apr 21 10:16:16.967019 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 21 10:16:16.967033 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 21 10:16:16.967047 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:16:16.967063 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:16:16.967080 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:16:16.967096 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:16:16.967113 kernel: RETBleed: Vulnerable
Apr 21 10:16:16.967129 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:16:16.967145 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:16:16.967164 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:16:16.967180 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:16:16.967196 kernel: active return thunk: its_return_thunk
Apr 21 10:16:16.967212 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:16:16.967228 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:16:16.967245 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:16:16.967261 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:16:16.967277 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 21 10:16:16.967293 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 21 10:16:16.967310 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:16:16.967326 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:16:16.967344 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:16:16.967361 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:16:16.967377 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:16:16.967393 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 21 10:16:16.967410 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 21 10:16:16.967426 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 21 10:16:16.967441 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 21 10:16:16.967455 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 21 10:16:16.967547 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 21 10:16:16.967565 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 21 10:16:16.967580 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:16:16.967595 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:16:16.967615 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:16:16.967631 kernel: landlock: Up and running.
Apr 21 10:16:16.967646 kernel: SELinux: Initializing.
Apr 21 10:16:16.967660 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 21 10:16:16.967675 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 21 10:16:16.967690 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 21 10:16:16.967706 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:16:16.967722 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:16:16.967738 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:16:16.967754 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 21 10:16:16.967773 kernel: signal: max sigframe size: 3632
Apr 21 10:16:16.967788 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:16:16.967805 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:16:16.967820 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:16:16.967835 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:16:16.967851 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:16:16.967866 kernel: .... node #0, CPUs: #1
Apr 21 10:16:16.967882 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 21 10:16:16.967899 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 21 10:16:16.967917 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:16:16.967933 kernel: smpboot: Max logical packages: 1
Apr 21 10:16:16.967948 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Apr 21 10:16:16.967963 kernel: devtmpfs: initialized
Apr 21 10:16:16.967979 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:16:16.967994 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 21 10:16:16.968010 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:16:16.968026 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:16:16.968041 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:16:16.968060 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:16:16.968075 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:16:16.968091 kernel: audit: type=2000 audit(1776766576.923:1): state=initialized audit_enabled=0 res=1
Apr 21 10:16:16.968107 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:16:16.968122 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:16:16.968138 kernel: cpuidle: using governor menu
Apr 21 10:16:16.968153 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:16:16.968169 kernel: dca service started, version 1.12.1
Apr 21 10:16:16.968185 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:16:16.968203 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:16:16.968219 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:16:16.968234 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:16:16.968249 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:16:16.968265 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:16:16.968281 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:16:16.968296 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:16:16.968312 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:16:16.968327 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 21 10:16:16.968345 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:16:16.968361 kernel: ACPI: Interpreter enabled
Apr 21 10:16:16.968376 kernel: ACPI: PM: (supports S0 S5)
Apr 21 10:16:16.968391 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:16:16.968406 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:16:16.968423 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:16:16.968436 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 21 10:16:16.968450 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:16:16.968715 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:16:16.968890 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 21 10:16:16.969025 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 21 10:16:16.969043 kernel: acpiphp: Slot [3] registered
Apr 21 10:16:16.969059 kernel: acpiphp: Slot [4] registered
Apr 21 10:16:16.969073 kernel: acpiphp: Slot [5] registered
Apr 21 10:16:16.969088 kernel: acpiphp: Slot [6] registered
Apr 21 10:16:16.969103 kernel: acpiphp: Slot [7] registered
Apr 21 10:16:16.969122 kernel: acpiphp: Slot [8] registered
Apr 21 10:16:16.969136 kernel: acpiphp: Slot [9] registered
Apr 21 10:16:16.969151 kernel: acpiphp: Slot [10] registered
Apr 21 10:16:16.969166 kernel: acpiphp: Slot [11] registered
Apr 21 10:16:16.969181 kernel: acpiphp: Slot [12] registered
Apr 21 10:16:16.969196 kernel: acpiphp: Slot [13] registered
Apr 21 10:16:16.969211 kernel: acpiphp: Slot [14] registered
Apr 21 10:16:16.969226 kernel: acpiphp: Slot [15] registered
Apr 21 10:16:16.969241 kernel: acpiphp: Slot [16] registered
Apr 21 10:16:16.969259 kernel: acpiphp: Slot [17] registered
Apr 21 10:16:16.969274 kernel: acpiphp: Slot [18] registered
Apr 21 10:16:16.969289 kernel: acpiphp: Slot [19] registered
Apr 21 10:16:16.969305 kernel: acpiphp: Slot [20] registered
Apr 21 10:16:16.969320 kernel: acpiphp: Slot [21] registered
Apr 21 10:16:16.969335 kernel: acpiphp: Slot [22] registered
Apr 21 10:16:16.969350 kernel: acpiphp: Slot [23] registered
Apr 21 10:16:16.969366 kernel: acpiphp: Slot [24] registered
Apr 21 10:16:16.969382 kernel: acpiphp: Slot [25] registered
Apr 21 10:16:16.969397 kernel: acpiphp: Slot [26] registered
Apr 21 10:16:16.969416 kernel: acpiphp: Slot [27] registered
Apr 21 10:16:16.969431 kernel: acpiphp: Slot [28] registered
Apr 21 10:16:16.969447 kernel: acpiphp: Slot [29] registered
Apr 21 10:16:16.969463 kernel: acpiphp: Slot [30] registered
Apr 21 10:16:16.969502 kernel: acpiphp: Slot [31] registered
Apr 21 10:16:16.969517 kernel: PCI host bridge to bus 0000:00
Apr 21 10:16:16.969703 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:16:16.969836 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:16:16.969971 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:16:16.970095 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 21 10:16:16.970211 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 21 10:16:16.970339 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:16:16.970509 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 21 10:16:16.970671 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 21 10:16:16.970824 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 21 10:16:16.970971 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 21 10:16:16.971110 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 21 10:16:16.971247 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 21 10:16:16.971380 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 21 10:16:16.971550 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 21 10:16:16.971686 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 21 10:16:16.971821 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 21 10:16:16.971980 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 21 10:16:16.972771 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 21 10:16:16.972928 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:16:16.973070 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 21 10:16:16.973211 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:16:16.973362 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 21 10:16:16.973586 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 21 10:16:16.973741 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 21 10:16:16.973883 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 21 10:16:16.973905 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:16:16.973923 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:16:16.973940 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:16:16.973957 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:16:16.973973 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 21 10:16:16.973994 kernel: iommu: Default domain type: Translated
Apr 21 10:16:16.974011 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:16:16.974028 kernel: efivars: Registered efivars operations
Apr 21 10:16:16.974044 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:16:16.974060 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:16:16.974077 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 21 10:16:16.974093 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 21 10:16:16.974234 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 21 10:16:16.974457 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 21 10:16:16.974619 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:16:16.974639 kernel: vgaarb: loaded
Apr 21 10:16:16.974657 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 21 10:16:16.974673 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 21 10:16:16.974690 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:16:16.974707 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:16:16.974723 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:16:16.974740 kernel: pnp: PnP ACPI init
Apr 21 10:16:16.974760 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:16:16.974777 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:16:16.974794 kernel: NET: Registered PF_INET protocol family
Apr 21 10:16:16.974811 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:16:16.974827 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 21 10:16:16.974844 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:16:16.974860 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 10:16:16.974877 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 21 10:16:16.974894 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 21 10:16:16.974914 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 21 10:16:16.974930 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 21 10:16:16.974946 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:16:16.974962 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:16:16.975097 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:16:16.975224 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:16:16.975350 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:16:16.975498 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 21 10:16:16.975629 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 21 10:16:16.975773 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 21 10:16:16.975795 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:16:16.975813 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:16:16.975830 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Apr 21 10:16:16.975847 kernel: clocksource: Switched to clocksource tsc
Apr 21 10:16:16.975864 kernel: Initialise system trusted keyrings
Apr 21 10:16:16.975880 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 21 10:16:16.975897 kernel: Key type asymmetric registered
Apr 21 10:16:16.975917 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:16:16.975933 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:16:16.975950 kernel: io scheduler mq-deadline registered
Apr 21 10:16:16.975966 kernel: io scheduler kyber registered
Apr 21 10:16:16.975983 kernel: io scheduler bfq registered
Apr 21 10:16:16.975999 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:16:16.976016 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:16:16.976033 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:16:16.976049 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:16:16.976069 kernel: i8042: Warning: Keylock active
Apr 21 10:16:16.976085 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:16:16.976102 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:16:16.976248 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 21 10:16:16.976378 kernel: rtc_cmos 00:00: registered as rtc0
Apr 21 10:16:16.976615 kernel: rtc_cmos 00:00: setting system clock to 2026-04-21T10:16:16 UTC (1776766576)
Apr 21 10:16:16.976754 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 21 10:16:16.976779 kernel: intel_pstate: CPU model not supported
Apr 21 10:16:16.976796 kernel: efifb: probing for efifb
Apr 21 10:16:16.976811 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 21 10:16:16.976827 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 21 10:16:16.976844 kernel: efifb: scrolling: redraw
Apr 21 10:16:16.976859 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 10:16:16.976874 kernel: Console: switching to colour frame buffer device 100x37
Apr 21 10:16:16.976890 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:16:16.976905 kernel: pstore: Using crash dump compression: deflate
Apr 21 10:16:16.976921 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 10:16:16.976940 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:16:16.976955 kernel: Segment Routing with IPv6
Apr 21 10:16:16.976972 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:16:16.976988 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:16:16.977004 kernel: Key type dns_resolver registered
Apr 21 10:16:16.977021 kernel: IPI shorthand broadcast: enabled
Apr 21 10:16:16.977065 kernel: sched_clock: Marking stable (489003054, 133227421)->(692401802, -70171327)
Apr 21 10:16:16.977085 kernel: registered taskstats version 1
Apr 21 10:16:16.977102 kernel: Loading compiled-in X.509 certificates
Apr 21 10:16:16.977122 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:16:16.977138 kernel: Key type .fscrypt registered
Apr 21 10:16:16.977155 kernel: Key type fscrypt-provisioning registered
Apr 21 10:16:16.977171 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:16:16.977188 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:16:16.977204 kernel: ima: No architecture policies found
Apr 21 10:16:16.977219 kernel: clk: Disabling unused clocks
Apr 21 10:16:16.977232 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:16:16.977246 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:16:16.977264 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:16:16.977279 kernel: Run /init as init process
Apr 21 10:16:16.977294 kernel: with arguments:
Apr 21 10:16:16.977309 kernel: /init
Apr 21 10:16:16.977324 kernel: with environment:
Apr 21 10:16:16.977339 kernel: HOME=/
Apr 21 10:16:16.977353 kernel: TERM=linux
Apr 21 10:16:16.977371 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:16:16.977392 systemd[1]: Detected virtualization amazon.
Apr 21 10:16:16.977408 systemd[1]: Detected architecture x86-64.
Apr 21 10:16:16.977423 systemd[1]: Running in initrd.
Apr 21 10:16:16.977439 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:16:16.977454 systemd[1]: Hostname set to .
Apr 21 10:16:16.977470 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:16:16.977500 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:16:16.977515 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:16:16.977535 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:16:16.977552 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:16:16.977568 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:16:16.977584 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:16:16.977604 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:16:16.977626 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:16:16.977642 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:16:16.977658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:16:16.977674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:16:16.977689 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:16:16.977706 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:16:16.977722 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:16:16.977741 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:16:16.977757 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:16:16.977773 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:16:16.977790 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:16:16.977805 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:16:16.977821 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:16:16.977837 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:16:16.977853 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:16:16.977869 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:16:16.977888 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:16:16.977904 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:16:16.977920 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:16:16.977936 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:16:16.977952 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:16:16.977968 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:16:16.977984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:16:16.978000 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:16:16.978019 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:16:16.978035 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:16:16.978051 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:16:16.978098 systemd-journald[179]: Collecting audit messages is disabled.
Apr 21 10:16:16.978136 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:16.978153 systemd-journald[179]: Journal started
Apr 21 10:16:16.978189 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2937586da093a4b829af20580496ea) is 4.7M, max 38.2M, 33.4M free.
Apr 21 10:16:16.966877 systemd-modules-load[180]: Inserted module 'overlay'
Apr 21 10:16:16.984662 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:16:17.008172 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:16:17.004718 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:16:17.019454 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:16:17.019511 kernel: Bridge firewalling registered
Apr 21 10:16:17.013534 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 21 10:16:17.014684 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:16:17.022735 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:16:17.025695 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:16:17.033178 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:16:17.044756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:16:17.045746 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:16:17.050530 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:16:17.057759 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:16:17.062694 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:16:17.064706 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:16:17.074702 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:16:17.087644 dracut-cmdline[210]: dracut-dracut-053
Apr 21 10:16:17.091918 dracut-cmdline[210]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:16:17.130043 systemd-resolved[214]: Positive Trust Anchors:
Apr 21 10:16:17.131111 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:16:17.131176 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:16:17.139837 systemd-resolved[214]: Defaulting to hostname 'linux'.
Apr 21 10:16:17.141252 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:16:17.142510 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:16:17.184510 kernel: SCSI subsystem initialized
Apr 21 10:16:17.194511 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:16:17.205509 kernel: iscsi: registered transport (tcp)
Apr 21 10:16:17.227522 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:16:17.227605 kernel: QLogic iSCSI HBA Driver
Apr 21 10:16:17.267085 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:16:17.273727 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:16:17.301016 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:16:17.301093 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:16:17.301116 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:16:17.344504 kernel: raid6: avx512x4 gen() 15375 MB/s
Apr 21 10:16:17.362500 kernel: raid6: avx512x2 gen() 15391 MB/s
Apr 21 10:16:17.380499 kernel: raid6: avx512x1 gen() 15377 MB/s
Apr 21 10:16:17.398501 kernel: raid6: avx2x4 gen() 15260 MB/s
Apr 21 10:16:17.416496 kernel: raid6: avx2x2 gen() 15338 MB/s
Apr 21 10:16:17.434729 kernel: raid6: avx2x1 gen() 11689 MB/s
Apr 21 10:16:17.434780 kernel: raid6: using algorithm avx512x2 gen() 15391 MB/s
Apr 21 10:16:17.453746 kernel: raid6: .... xor() 24862 MB/s, rmw enabled
Apr 21 10:16:17.453805 kernel: raid6: using avx512x2 recovery algorithm
Apr 21 10:16:17.475505 kernel: xor: automatically using best checksumming function avx
Apr 21 10:16:17.635504 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:16:17.646501 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:16:17.654663 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:16:17.669638 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Apr 21 10:16:17.674710 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:16:17.684347 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 10:16:17.702197 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Apr 21 10:16:17.732926 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:16:17.737699 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:16:17.789923 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:16:17.799786 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:16:17.824817 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:16:17.827227 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:16:17.829416 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:16:17.830544 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:16:17.837022 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:16:17.863538 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:16:17.891493 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 10:16:17.911178 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:16:17.912214 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:16:17.917896 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 21 10:16:17.918154 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 21 10:16:17.919204 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:16:17.921002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:16:17.926259 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 21 10:16:17.921228 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:17.921905 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:16:17.931115 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:16:17.937598 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:75:d5:98:8c:89
Apr 21 10:16:17.947281 (udev-worker)[449]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:16:17.952907 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 10:16:17.952956 kernel: AES CTR mode by8 optimization enabled
Apr 21 10:16:17.955959 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:16:17.956931 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:17.968769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:16:17.972810 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 21 10:16:17.973041 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 21 10:16:17.988493 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 21 10:16:18.000267 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 10:16:18.000335 kernel: GPT:9289727 != 33554431
Apr 21 10:16:18.000354 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 10:16:18.000372 kernel: GPT:9289727 != 33554431
Apr 21 10:16:18.000399 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 10:16:18.000418 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:16:18.007374 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:18.015159 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:16:18.033633 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:16:18.080505 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (457)
Apr 21 10:16:18.108508 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (442)
Apr 21 10:16:18.157453 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 21 10:16:18.173116 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:16:18.189121 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 21 10:16:18.204833 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 21 10:16:18.205514 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 21 10:16:18.218939 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:16:18.226224 disk-uuid[631]: Primary Header is updated.
Apr 21 10:16:18.226224 disk-uuid[631]: Secondary Entries is updated.
Apr 21 10:16:18.226224 disk-uuid[631]: Secondary Header is updated.
Apr 21 10:16:18.233498 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:16:18.242514 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:16:18.250528 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:16:19.250694 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:16:19.250767 disk-uuid[632]: The operation has completed successfully.
Apr 21 10:16:19.396335 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:16:19.396467 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:16:19.412690 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:16:19.417520 sh[979]: Success
Apr 21 10:16:19.438495 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 21 10:16:19.544254 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:16:19.552613 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:16:19.556722 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:16:19.591640 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 10:16:19.591714 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:16:19.594877 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:16:19.594936 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:16:19.596244 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:16:19.699501 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 21 10:16:19.722006 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:16:19.723456 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:16:19.735766 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:16:19.739695 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:16:19.769884 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:16:19.769962 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:16:19.769985 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:16:19.779503 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:16:19.795506 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:16:19.795445 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 21 10:16:19.804561 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:16:19.810747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:16:19.849853 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:16:19.858750 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:16:19.879998 systemd-networkd[1171]: lo: Link UP
Apr 21 10:16:19.880010 systemd-networkd[1171]: lo: Gained carrier
Apr 21 10:16:19.881803 systemd-networkd[1171]: Enumeration completed
Apr 21 10:16:19.881932 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:16:19.882425 systemd-networkd[1171]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:16:19.882429 systemd-networkd[1171]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:16:19.884531 systemd[1]: Reached target network.target - Network.
Apr 21 10:16:19.886145 systemd-networkd[1171]: eth0: Link UP
Apr 21 10:16:19.886150 systemd-networkd[1171]: eth0: Gained carrier
Apr 21 10:16:19.886164 systemd-networkd[1171]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:16:19.896592 systemd-networkd[1171]: eth0: DHCPv4 address 172.31.28.26/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:16:20.249151 ignition[1116]: Ignition 2.19.0
Apr 21 10:16:20.249166 ignition[1116]: Stage: fetch-offline
Apr 21 10:16:20.249438 ignition[1116]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:20.249450 ignition[1116]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:16:20.249965 ignition[1116]: Ignition finished successfully
Apr 21 10:16:20.252381 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:16:20.257701 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:16:20.272840 ignition[1179]: Ignition 2.19.0
Apr 21 10:16:20.272853 ignition[1179]: Stage: fetch
Apr 21 10:16:20.273328 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:20.273343 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:16:20.273464 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:16:20.281247 ignition[1179]: PUT result: OK
Apr 21 10:16:20.283189 ignition[1179]: parsed url from cmdline: ""
Apr 21 10:16:20.283199 ignition[1179]: no config URL provided
Apr 21 10:16:20.283210 ignition[1179]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:16:20.283225 ignition[1179]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:16:20.283246 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:16:20.283745 ignition[1179]: PUT result: OK
Apr 21 10:16:20.283793 ignition[1179]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 21 10:16:20.284745 ignition[1179]: GET result: OK
Apr 21 10:16:20.284837 ignition[1179]: parsing config with SHA512: b7758ac9b405a9ec7267b26cce90d3a0333e49a070a5935c3207f104ee247082c273e7731087709c8d33beb8ed87a5816df8e75ae803fbaff886c9ec6da7cab1
Apr 21 10:16:20.290540 unknown[1179]: fetched base config from "system"
Apr 21 10:16:20.290556 unknown[1179]: fetched base config from "system"
Apr 21 10:16:20.290567 unknown[1179]: fetched user config from "aws"
Apr 21 10:16:20.291412 ignition[1179]: fetch: fetch complete
Apr 21 10:16:20.291420 ignition[1179]: fetch: fetch passed
Apr 21 10:16:20.291495 ignition[1179]: Ignition finished successfully
Apr 21 10:16:20.293721 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 21 10:16:20.299711 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:16:20.315695 ignition[1185]: Ignition 2.19.0
Apr 21 10:16:20.315709 ignition[1185]: Stage: kargs
Apr 21 10:16:20.316167 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:20.316183 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:16:20.316302 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:16:20.317984 ignition[1185]: PUT result: OK
Apr 21 10:16:20.322140 ignition[1185]: kargs: kargs passed
Apr 21 10:16:20.322226 ignition[1185]: Ignition finished successfully
Apr 21 10:16:20.324582 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:16:20.328737 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:16:20.347003 ignition[1192]: Ignition 2.19.0
Apr 21 10:16:20.347018 ignition[1192]: Stage: disks
Apr 21 10:16:20.347555 ignition[1192]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:20.347570 ignition[1192]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:16:20.347695 ignition[1192]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:16:20.349532 ignition[1192]: PUT result: OK
Apr 21 10:16:20.352323 ignition[1192]: disks: disks passed
Apr 21 10:16:20.352383 ignition[1192]: Ignition finished successfully
Apr 21 10:16:20.354151 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:16:20.355264 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:16:20.355924 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:16:20.356529 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:16:20.356867 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:16:20.357404 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:16:20.364761 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:16:20.435987 systemd-fsck[1200]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:16:20.440181 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:16:20.445596 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:16:20.552531 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:16:20.553162 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:16:20.554282 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:16:20.566628 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:16:20.569593 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:16:20.572727 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:16:20.572804 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:16:20.572841 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:16:20.586047 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:16:20.592520 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1219)
Apr 21 10:16:20.593786 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:16:20.600508 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:16:20.600583 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:16:20.600606 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:16:20.608497 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:16:20.610678 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:16:20.978980 initrd-setup-root[1244]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:16:20.997433 initrd-setup-root[1251]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:16:21.003241 initrd-setup-root[1258]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:16:21.008526 initrd-setup-root[1265]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:16:21.092660 systemd-networkd[1171]: eth0: Gained IPv6LL
Apr 21 10:16:21.268713 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:16:21.274595 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:16:21.279492 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:16:21.286522 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:16:21.288744 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:16:21.320289 ignition[1332]: INFO : Ignition 2.19.0
Apr 21 10:16:21.321825 ignition[1332]: INFO : Stage: mount
Apr 21 10:16:21.323157 ignition[1332]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:21.323157 ignition[1332]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:16:21.324424 ignition[1332]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:16:21.327132 ignition[1332]: INFO : PUT result: OK
Apr 21 10:16:21.331526 ignition[1332]: INFO : mount: mount passed
Apr 21 10:16:21.332024 ignition[1332]: INFO : Ignition finished successfully
Apr 21 10:16:21.334202 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:16:21.335282 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:16:21.339625 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:16:21.367775 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:16:21.387501 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1345)
Apr 21 10:16:21.391635 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:16:21.391711 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:16:21.391733 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:16:21.398510 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:16:21.401606 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:16:21.423755 ignition[1362]: INFO : Ignition 2.19.0
Apr 21 10:16:21.423755 ignition[1362]: INFO : Stage: files
Apr 21 10:16:21.425205 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:21.425205 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:16:21.425205 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:16:21.426461 ignition[1362]: INFO : PUT result: OK
Apr 21 10:16:21.430955 ignition[1362]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:16:21.431878 ignition[1362]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:16:21.431878 ignition[1362]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:16:21.449232 ignition[1362]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:16:21.450264 ignition[1362]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:16:21.450264 ignition[1362]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:16:21.449801 unknown[1362]: wrote ssh authorized keys file for user: core
Apr 21 10:16:21.460837 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 10:16:21.461965 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 21 10:16:21.461965 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:16:21.461965 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:16:21.555254 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 21 10:16:21.714728 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:21.716166 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:21.726089 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 10:16:22.208764 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 21 10:16:22.604729 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:16:22.604729 ignition[1362]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 21 10:16:22.607860 ignition[1362]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:16:22.609104 ignition[1362]: INFO : files: files passed
Apr 21 10:16:22.609104 ignition[1362]: INFO : Ignition finished successfully
Apr 21 10:16:22.611219 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 10:16:22.621718 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:16:22.626185 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:16:22.627419 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:16:22.629419 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:16:22.641363 initrd-setup-root-after-ignition[1390]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:16:22.641363 initrd-setup-root-after-ignition[1390]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:16:22.644571 initrd-setup-root-after-ignition[1394]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:16:22.647137 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:16:22.647914 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:16:22.655686 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:16:22.681828 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:16:22.681960 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:16:22.683288 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:16:22.684385 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:16:22.685233 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:16:22.690650 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:16:22.704619 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:16:22.709657 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:16:22.722630 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:16:22.723803 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:16:22.724779 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:16:22.725237 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:16:22.725391 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:16:22.726501 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:16:22.727272 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:16:22.728055 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:16:22.728832 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:16:22.729589 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:16:22.730454 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:16:22.731174 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:16:22.731952 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:16:22.732713 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:16:22.733850 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:16:22.734682 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:16:22.734862 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:16:22.735944 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:16:22.736743 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:16:22.737431 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:16:22.737590 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:16:22.738256 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:16:22.738624 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:16:22.739606 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:16:22.739783 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:16:22.740874 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:16:22.741025 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:16:22.747840 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:16:22.749239 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:16:22.749447 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:16:22.752701 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:16:22.755119 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:16:22.755727 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:16:22.759756 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:16:22.759943 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:16:22.767333 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:16:22.769037 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:16:22.770353 ignition[1414]: INFO : Ignition 2.19.0
Apr 21 10:16:22.770353 ignition[1414]: INFO : Stage: umount
Apr 21 10:16:22.773438 ignition[1414]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:16:22.773438 ignition[1414]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:16:22.773438 ignition[1414]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:16:22.773438 ignition[1414]: INFO : PUT result: OK
Apr 21 10:16:22.777667 ignition[1414]: INFO : umount: umount passed
Apr 21 10:16:22.778405 ignition[1414]: INFO : Ignition finished successfully
Apr 21 10:16:22.780894 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:16:22.781050 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:16:22.783059 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:16:22.783181 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:16:22.783779 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:16:22.783840 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:16:22.784679 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:16:22.784737 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:16:22.788034 systemd[1]: Stopped target network.target - Network.
Apr 21 10:16:22.788454 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:16:22.788540 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:16:22.789002 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:16:22.789419 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:16:22.789542 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:16:22.790466 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:16:22.790922 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:16:22.791381 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:16:22.791433 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:16:22.792600 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:16:22.792647 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:16:22.793905 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:16:22.793964 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:16:22.794429 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:16:22.794500 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:16:22.795130 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:16:22.796691 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:16:22.799119 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:16:22.799520 systemd-networkd[1171]: eth0: DHCPv6 lease lost
Apr 21 10:16:22.803215 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:16:22.803353 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:16:22.804578 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:16:22.804712 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:16:22.808005 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:16:22.808314 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:16:22.811281 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:16:22.811360 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:16:22.812039 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:16:22.812106 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:16:22.819666 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:16:22.820250 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:16:22.820332 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:16:22.821609 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:16:22.821671 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:16:22.822102 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:16:22.822160 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:16:22.822855 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:16:22.822912 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:16:22.824043 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:16:22.836457 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:16:22.836691 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:16:22.839200 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:16:22.839331 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:16:22.841593 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:16:22.841667 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:16:22.842639 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:16:22.842688 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:16:22.843325 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:16:22.843389 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:16:22.844487 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:16:22.844551 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:16:22.845605 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:16:22.845664 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:16:22.852644 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:16:22.853209 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:16:22.853289 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:16:22.855245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:16:22.855319 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:22.861774 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:16:22.861930 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:16:22.863022 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:16:22.868694 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:16:22.896623 systemd[1]: Switching root.
Apr 21 10:16:22.929595 systemd-journald[179]: Journal stopped
Apr 21 10:16:24.931331 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 21 10:16:24.931425 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:16:24.931453 kernel: SELinux: policy capability open_perms=1
Apr 21 10:16:24.931502 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:16:24.931522 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:16:24.931541 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:16:24.931562 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:16:24.931584 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:16:24.931605 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:16:24.931636 kernel: audit: type=1403 audit(1776766583.669:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:16:24.931659 systemd[1]: Successfully loaded SELinux policy in 55.972ms.
Apr 21 10:16:24.931684 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.931ms.
Apr 21 10:16:24.931712 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:16:24.931733 systemd[1]: Detected virtualization amazon.
Apr 21 10:16:24.931755 systemd[1]: Detected architecture x86-64.
Apr 21 10:16:24.931777 systemd[1]: Detected first boot.
Apr 21 10:16:24.931796 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:16:24.931814 zram_generator::config[1473]: No configuration found.
Apr 21 10:16:24.931840 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:16:24.931858 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:16:24.931878 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 21 10:16:24.931910 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:16:24.931933 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:16:24.931955 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:16:24.931977 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:16:24.932002 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:16:24.932031 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:16:24.932053 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:16:24.932078 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:16:24.932102 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:16:24.932125 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:16:24.932149 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:16:24.932172 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:16:24.932194 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:16:24.932221 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:16:24.932246 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:16:24.932268 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:16:24.932291 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:16:24.932320 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:16:24.932343 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:16:24.932364 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:16:24.932386 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:16:24.932409 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:16:24.932433 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:16:24.932455 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:16:24.932500 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:16:24.932519 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:16:24.932539 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:16:24.932558 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:16:24.932577 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:16:24.932598 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:16:24.932617 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:16:24.932642 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:16:24.932661 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:24.932682 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:16:24.932704 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:16:24.932727 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:16:24.932749 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:16:24.932772 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:16:24.932795 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:16:24.932820 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:16:24.932842 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:16:24.932863 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:16:24.932885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:16:24.932906 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:16:24.932928 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:16:24.932949 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:16:24.932972 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 21 10:16:24.932994 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 21 10:16:24.933019 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:16:24.933040 kernel: loop: module loaded
Apr 21 10:16:24.933063 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:16:24.933084 kernel: fuse: init (API version 7.39)
Apr 21 10:16:24.933105 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:16:24.933127 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:16:24.933150 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:16:24.933172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:24.933228 systemd-journald[1577]: Collecting audit messages is disabled.
Apr 21 10:16:24.933276 systemd-journald[1577]: Journal started
Apr 21 10:16:24.933319 systemd-journald[1577]: Runtime Journal (/run/log/journal/ec2937586da093a4b829af20580496ea) is 4.7M, max 38.2M, 33.4M free.
Apr 21 10:16:24.938427 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:16:24.942523 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:16:24.945245 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:16:24.947907 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:16:24.949713 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:16:24.951576 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:16:24.952364 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:16:24.953578 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:16:24.956139 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:16:24.956394 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:16:24.957931 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:16:24.958171 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:16:24.960609 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:16:24.960839 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:16:24.964050 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:16:24.965123 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:16:24.965345 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:16:24.966999 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:16:24.967228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:16:24.968668 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:16:24.969908 kernel: ACPI: bus type drm_connector registered
Apr 21 10:16:24.971251 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:16:24.971519 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:16:24.972629 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:16:24.973959 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:16:24.987373 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:16:24.995176 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:16:24.998973 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:16:24.999699 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:16:25.005724 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:16:25.020744 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:16:25.023404 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:16:25.032650 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:16:25.033684 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:16:25.045691 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:16:25.049415 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:16:25.070271 systemd-journald[1577]: Time spent on flushing to /var/log/journal/ec2937586da093a4b829af20580496ea is 79.980ms for 968 entries.
Apr 21 10:16:25.070271 systemd-journald[1577]: System Journal (/var/log/journal/ec2937586da093a4b829af20580496ea) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:16:25.162804 systemd-journald[1577]: Received client request to flush runtime journal.
Apr 21 10:16:25.067616 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:16:25.068495 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:16:25.087651 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:16:25.088516 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:16:25.136939 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:16:25.150795 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:16:25.169193 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:16:25.190263 systemd-tmpfiles[1625]: ACLs are not supported, ignoring.
Apr 21 10:16:25.190304 systemd-tmpfiles[1625]: ACLs are not supported, ignoring.
Apr 21 10:16:25.190860 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:16:25.199383 udevadm[1634]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:16:25.202131 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:16:25.216797 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:16:25.269387 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:16:25.279745 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:16:25.309874 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Apr 21 10:16:25.310312 systemd-tmpfiles[1648]: ACLs are not supported, ignoring.
Apr 21 10:16:25.316978 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:16:25.808305 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:16:25.815696 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:16:25.847271 systemd-udevd[1654]: Using default interface naming scheme 'v255'.
Apr 21 10:16:25.916759 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:16:25.926673 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:16:25.967786 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:16:26.001879 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 21 10:16:26.023063 (udev-worker)[1670]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:16:26.071446 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:16:26.136498 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 21 10:16:26.145725 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:16:26.145799 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Apr 21 10:16:26.149597 kernel: ACPI: button: Sleep Button [SLPF]
Apr 21 10:16:26.169513 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 21 10:16:26.203501 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Apr 21 10:16:26.214498 systemd-networkd[1659]: lo: Link UP
Apr 21 10:16:26.214513 systemd-networkd[1659]: lo: Gained carrier
Apr 21 10:16:26.216835 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1665)
Apr 21 10:16:26.218089 systemd-networkd[1659]: Enumeration completed
Apr 21 10:16:26.219118 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:16:26.220125 systemd-networkd[1659]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:16:26.220135 systemd-networkd[1659]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:16:26.227127 systemd-networkd[1659]: eth0: Link UP
Apr 21 10:16:26.227668 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:16:26.229174 systemd-networkd[1659]: eth0: Gained carrier
Apr 21 10:16:26.229202 systemd-networkd[1659]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:16:26.242564 systemd-networkd[1659]: eth0: DHCPv4 address 172.31.28.26/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:16:26.275796 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:16:26.275349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:16:26.294699 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:16:26.295157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:26.303863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:16:26.415101 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 21 10:16:26.430415 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:16:26.440695 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:16:26.456509 lvm[1779]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:16:26.467086 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:16:26.493965 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:16:26.495733 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:16:26.503671 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:16:26.509131 lvm[1785]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:16:26.540166 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:16:26.541782 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:16:26.542679 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:16:26.542731 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:16:26.543354 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:16:26.545999 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:16:26.550686 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:16:26.553755 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:16:26.555676 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:16:26.558793 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:16:26.568673 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:16:26.574767 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:16:26.579745 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:16:26.597352 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:16:26.601105 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:16:26.601501 kernel: loop0: detected capacity change from 0 to 228704
Apr 21 10:16:26.608816 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:16:26.822494 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:16:26.863502 kernel: loop1: detected capacity change from 0 to 140768
Apr 21 10:16:27.000502 kernel: loop2: detected capacity change from 0 to 142488
Apr 21 10:16:27.114501 kernel: loop3: detected capacity change from 0 to 61336
Apr 21 10:16:27.171512 kernel: loop4: detected capacity change from 0 to 228704
Apr 21 10:16:27.206544 kernel: loop5: detected capacity change from 0 to 140768
Apr 21 10:16:27.230548 kernel: loop6: detected capacity change from 0 to 142488
Apr 21 10:16:27.254586 kernel: loop7: detected capacity change from 0 to 61336
Apr 21 10:16:27.276267 (sd-merge)[1806]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 21 10:16:27.276977 (sd-merge)[1806]: Merged extensions into '/usr'.
Apr 21 10:16:27.281937 systemd[1]: Reloading requested from client PID 1793 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:16:27.281955 systemd[1]: Reloading...
Apr 21 10:16:27.354506 zram_generator::config[1830]: No configuration found.
Apr 21 10:16:27.543601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:16:27.629181 systemd[1]: Reloading finished in 346 ms.
Apr 21 10:16:27.651662 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:16:27.660884 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:16:27.664700 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:16:27.677584 systemd[1]: Reloading requested from client PID 1891 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:16:27.677609 systemd[1]: Reloading...
Apr 21 10:16:27.703138 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:16:27.704190 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:16:27.705996 systemd-tmpfiles[1892]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:16:27.707208 systemd-tmpfiles[1892]: ACLs are not supported, ignoring.
Apr 21 10:16:27.708036 systemd-tmpfiles[1892]: ACLs are not supported, ignoring.
Apr 21 10:16:27.729087 systemd-tmpfiles[1892]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:16:27.731574 systemd-tmpfiles[1892]: Skipping /boot
Apr 21 10:16:27.768928 systemd-tmpfiles[1892]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:16:27.768944 systemd-tmpfiles[1892]: Skipping /boot
Apr 21 10:16:27.783498 zram_generator::config[1920]: No configuration found.
Apr 21 10:16:27.812710 systemd-networkd[1659]: eth0: Gained IPv6LL
Apr 21 10:16:27.970559 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:16:28.055034 systemd[1]: Reloading finished in 376 ms.
Apr 21 10:16:28.075570 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:16:28.082124 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:16:28.097897 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:16:28.101382 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:16:28.105665 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:16:28.117689 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:16:28.125689 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:16:28.144647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:28.145825 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:16:28.149235 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:16:28.164429 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:16:28.170870 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:16:28.171653 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:16:28.171849 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:28.185709 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:16:28.185974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:16:28.193828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:16:28.194075 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:16:28.200871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:28.201313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:16:28.218934 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:16:28.220689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:16:28.220893 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:16:28.221016 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:28.225469 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:16:28.229807 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:16:28.230067 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:16:28.243388 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:16:28.254166 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:16:28.255984 augenrules[2016]: No rules
Apr 21 10:16:28.256771 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:16:28.260903 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:16:28.273866 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:16:28.281291 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:28.281602 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:16:28.289803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:16:28.295783 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:16:28.300714 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:16:28.311822 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:16:28.312172 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:16:28.313074 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:16:28.314717 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:16:28.315460 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:16:28.332626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:16:28.332916 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:16:28.334840 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:16:28.337008 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:16:28.338751 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:16:28.340749 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:16:28.368754 systemd-resolved[1988]: Positive Trust Anchors:
Apr 21 10:16:28.368773 systemd-resolved[1988]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:16:28.368821 systemd-resolved[1988]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:16:28.374562 systemd-resolved[1988]: Defaulting to hostname 'linux'.
Apr 21 10:16:28.376788 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:16:28.379519 systemd[1]: Reached target network.target - Network.
Apr 21 10:16:28.380587 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:16:28.381129 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:16:28.422947 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:16:28.424036 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:16:28.448000 ldconfig[1789]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 10:16:28.455143 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:16:28.464769 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:16:28.476193 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:16:28.477199 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:16:28.478148 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 21 10:16:28.479052 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:16:28.480140 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:16:28.480943 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:16:28.481321 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:16:28.481722 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:16:28.481770 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:16:28.482126 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:16:28.483367 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:16:28.485414 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:16:28.487128 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:16:28.489594 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:16:28.490147 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:16:28.490743 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:16:28.491433 systemd[1]: System is tainted: cgroupsv1
Apr 21 10:16:28.491503 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:16:28.491533 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:16:28.495462 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:16:28.499718 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:16:28.510220 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:16:28.513948 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:16:28.523812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:16:28.524372 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:16:28.554172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:16:28.559590 jq[2053]: false
Apr 21 10:16:28.559894 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:16:28.576672 systemd[1]: Started ntpd.service - Network Time Service.
Apr 21 10:16:28.580881 extend-filesystems[2054]: Found loop4
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found loop5
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found loop6
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found loop7
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found nvme0n1
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found nvme0n1p1
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found nvme0n1p2
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found nvme0n1p3
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found usr
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found nvme0n1p4
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found nvme0n1p6
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found nvme0n1p7
Apr 21 10:16:28.588308 extend-filesystems[2054]: Found nvme0n1p9
Apr 21 10:16:28.588308 extend-filesystems[2054]: Checking size of /dev/nvme0n1p9
Apr 21 10:16:28.599693 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:16:28.605613 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:16:28.610593 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 21 10:16:28.624544 extend-filesystems[2054]: Resized partition /dev/nvme0n1p9
Apr 21 10:16:28.626663 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:16:28.635187 extend-filesystems[2071]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:16:28.632630 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:16:28.644067 dbus-daemon[2052]: [system] SELinux support is enabled
Apr 21 10:16:28.655951 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 21 10:16:28.653038 dbus-daemon[2052]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1659 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 21 10:16:28.659678 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:16:28.664037 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:16:28.683565 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:16:28.698604 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:16:28.700356 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:16:28.728061 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:16:28.728399 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:16:28.731039 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:16:28.734285 jq[2088]: true
Apr 21 10:16:28.751764 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:16:28.752116 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 21 10:16:28.760135 coreos-metadata[2050]: Apr 21 10:16:28.759 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:16:28.776276 coreos-metadata[2050]: Apr 21 10:16:28.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 21 10:16:28.779287 jq[2100]: true
Apr 21 10:16:28.784727 coreos-metadata[2050]: Apr 21 10:16:28.784 INFO Fetch successful
Apr 21 10:16:28.788195 coreos-metadata[2050]: Apr 21 10:16:28.784 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 21 10:16:28.788195 coreos-metadata[2050]: Apr 21 10:16:28.787 INFO Fetch successful
Apr 21 10:16:28.788195 coreos-metadata[2050]: Apr 21 10:16:28.787 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 21 10:16:28.792159 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:16:28.799087 coreos-metadata[2050]: Apr 21 10:16:28.798 INFO Fetch successful
Apr 21 10:16:28.799087 coreos-metadata[2050]: Apr 21 10:16:28.798 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 21 10:16:28.801114 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:16:28.810794 coreos-metadata[2050]: Apr 21 10:16:28.810 INFO Fetch successful
Apr 21 10:16:28.810794 coreos-metadata[2050]: Apr 21 10:16:28.810 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 21 10:16:28.813333 coreos-metadata[2050]: Apr 21 10:16:28.813 INFO Fetch failed with 404: resource not found
Apr 21 10:16:28.813333 coreos-metadata[2050]: Apr 21 10:16:28.813 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 21 10:16:28.816376 update_engine[2080]: I20260421 10:16:28.816265 2080 main.cc:92] Flatcar Update Engine starting
Apr 21 10:16:28.819084 coreos-metadata[2050]: Apr 21 10:16:28.818 INFO Fetch successful
Apr 21 10:16:28.819084 coreos-metadata[2050]: Apr 21 10:16:28.819 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 21 10:16:28.821572 update_engine[2080]: I20260421 10:16:28.819402 2080 update_check_scheduler.cc:74] Next update check in 3m13s
Apr 21 10:16:28.829464 ntpd[2061]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:10:59 UTC 2026 (1): Starting
Apr 21 10:16:28.833940 coreos-metadata[2050]: Apr 21 10:16:28.830 INFO Fetch successful
Apr 21 10:16:28.833940 coreos-metadata[2050]: Apr 21 10:16:28.830 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 21 10:16:28.833940 coreos-metadata[2050]: Apr 21 10:16:28.832 INFO Fetch successful
Apr 21 10:16:28.833940 coreos-metadata[2050]: Apr 21 10:16:28.832 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 21 10:16:28.833940 coreos-metadata[2050]: Apr 21 10:16:28.833 INFO Fetch successful
Apr 21 10:16:28.833940 coreos-metadata[2050]: Apr 21 10:16:28.833 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 21 10:16:28.834703 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:10:59 UTC 2026 (1): Starting
Apr 21 10:16:28.834703 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:16:28.834703 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: ----------------------------------------------------
Apr 21 10:16:28.834703 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:16:28.834703 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:16:28.834703 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: corporation. Support and training for ntp-4 are
Apr 21 10:16:28.834703 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: available at https://www.nwtime.org/support
Apr 21 10:16:28.834703 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: ----------------------------------------------------
Apr 21 10:16:28.834703 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: proto: precision = 0.065 usec (-24)
Apr 21 10:16:28.829513 ntpd[2061]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:16:28.853732 coreos-metadata[2050]: Apr 21 10:16:28.834 INFO Fetch successful
Apr 21 10:16:28.841863 (ntainerd)[2109]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: basedate set to 2026-04-09
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: Listen normally on 3 eth0 172.31.28.26:123
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: Listen normally on 4 lo [::1]:123
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: Listen normally on 5 eth0 [fe80::475:d5ff:fe98:8c89%2]:123
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: Listening on routing socket on fd #22 for interface updates
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:16:28.865666 ntpd[2061]: 21 Apr 10:16:28 ntpd[2061]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:16:28.829524 ntpd[2061]: ----------------------------------------------------
Apr 21 10:16:28.829534 ntpd[2061]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:16:28.829543 ntpd[2061]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:16:28.829553 ntpd[2061]: corporation. Support and training for ntp-4 are
Apr 21 10:16:28.829563 ntpd[2061]: available at https://www.nwtime.org/support
Apr 21 10:16:28.883808 tar[2097]: linux-amd64/LICENSE
Apr 21 10:16:28.829573 ntpd[2061]: ----------------------------------------------------
Apr 21 10:16:28.884193 tar[2097]: linux-amd64/helm
Apr 21 10:16:28.833138 ntpd[2061]: proto: precision = 0.065 usec (-24)
Apr 21 10:16:28.835057 ntpd[2061]: basedate set to 2026-04-09
Apr 21 10:16:28.835076 ntpd[2061]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:16:28.839122 ntpd[2061]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:16:28.839171 ntpd[2061]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:16:28.839364 ntpd[2061]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:16:28.839404 ntpd[2061]: Listen normally on 3 eth0 172.31.28.26:123
Apr 21 10:16:28.887656 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:16:28.839444 ntpd[2061]: Listen normally on 4 lo [::1]:123
Apr 21 10:16:28.839513 ntpd[2061]: Listen normally on 5 eth0 [fe80::475:d5ff:fe98:8c89%2]:123
Apr 21 10:16:28.839554 ntpd[2061]: Listening on routing socket on fd #22 for interface updates
Apr 21 10:16:28.840894 ntpd[2061]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:16:28.840926 ntpd[2061]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:16:28.886833 dbus-daemon[2052]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 10:16:28.898136 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 21 10:16:28.898179 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:16:28.915888 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 21 10:16:28.916685 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 21 10:16:28.917853 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:16:28.917892 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:16:28.921198 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:16:28.931668 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:16:28.949447 extend-filesystems[2071]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 21 10:16:28.949447 extend-filesystems[2071]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 21 10:16:28.949447 extend-filesystems[2071]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Apr 21 10:16:28.957579 extend-filesystems[2054]: Resized filesystem in /dev/nvme0n1p9
Apr 21 10:16:28.967500 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1670)
Apr 21 10:16:28.998431 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:16:28.998867 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:16:29.001323 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 21 10:16:29.021628 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 21 10:16:29.027327 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:16:29.041074 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:16:29.055270 bash[2157]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:16:29.062022 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:16:29.084718 systemd[1]: Starting sshkeys.service...
Apr 21 10:16:29.159636 systemd-logind[2075]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:16:29.172499 systemd-logind[2075]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 21 10:16:29.172604 systemd-logind[2075]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:16:29.181986 systemd-logind[2075]: New seat seat0.
Apr 21 10:16:29.193106 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:16:29.204748 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:16:29.212269 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:16:29.467784 amazon-ssm-agent[2165]: Initializing new seelog logger
Apr 21 10:16:29.471953 amazon-ssm-agent[2165]: New Seelog Logger Creation Complete
Apr 21 10:16:29.471953 amazon-ssm-agent[2165]: 2026/04/21 10:16:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:16:29.471953 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:16:29.471953 amazon-ssm-agent[2165]: 2026/04/21 10:16:29 processing appconfig overrides
Apr 21 10:16:29.483508 amazon-ssm-agent[2165]: 2026/04/21 10:16:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:16:29.483508 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:16:29.483508 amazon-ssm-agent[2165]: 2026/04/21 10:16:29 processing appconfig overrides
Apr 21 10:16:29.483508 amazon-ssm-agent[2165]: 2026/04/21 10:16:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:16:29.483508 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:16:29.483508 amazon-ssm-agent[2165]: 2026/04/21 10:16:29 processing appconfig overrides
Apr 21 10:16:29.483508 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO Proxy environment variables:
Apr 21 10:16:29.498500 amazon-ssm-agent[2165]: 2026/04/21 10:16:29 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:16:29.498500 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:16:29.498500 amazon-ssm-agent[2165]: 2026/04/21 10:16:29 processing appconfig overrides
Apr 21 10:16:29.501382 locksmithd[2143]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:16:29.582701 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO https_proxy:
Apr 21 10:16:29.595096 coreos-metadata[2195]: Apr 21 10:16:29.587 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:16:29.595096 coreos-metadata[2195]: Apr 21 10:16:29.588 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 21 10:16:29.595096 coreos-metadata[2195]: Apr 21 10:16:29.591 INFO Fetch successful
Apr 21 10:16:29.595096 coreos-metadata[2195]: Apr 21 10:16:29.591 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 21 10:16:29.595096 coreos-metadata[2195]: Apr 21 10:16:29.594 INFO Fetch successful
Apr 21 10:16:29.597573 unknown[2195]: wrote ssh authorized keys file for user: core
Apr 21 10:16:29.681570 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO http_proxy:
Apr 21 10:16:29.693251 update-ssh-keys[2267]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:16:29.699751 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 10:16:29.711084 systemd[1]: Finished sshkeys.service.
Apr 21 10:16:29.781401 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO no_proxy:
Apr 21 10:16:29.789492 containerd[2109]: time="2026-04-21T10:16:29.788073232Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:16:29.820659 dbus-daemon[2052]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 21 10:16:29.820858 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 21 10:16:29.830764 dbus-daemon[2052]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2141 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 21 10:16:29.840871 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 21 10:16:29.864964 polkitd[2289]: Started polkitd version 121
Apr 21 10:16:29.882226 polkitd[2289]: Loading rules from directory /etc/polkit-1/rules.d
Apr 21 10:16:29.883753 polkitd[2289]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 21 10:16:29.886526 polkitd[2289]: Finished loading, compiling and executing 2 rules
Apr 21 10:16:29.890832 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO Checking if agent identity type OnPrem can be assumed
Apr 21 10:16:29.896721 dbus-daemon[2052]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 21 10:16:29.897289 systemd[1]: Started polkit.service - Authorization Manager.
Apr 21 10:16:29.897433 polkitd[2289]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 21 10:16:29.914729 containerd[2109]: time="2026-04-21T10:16:29.914448504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:29.920398 containerd[2109]: time="2026-04-21T10:16:29.920009771Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:16:29.920562 containerd[2109]: time="2026-04-21T10:16:29.920539676Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:16:29.921501 containerd[2109]: time="2026-04-21T10:16:29.920742602Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:16:29.921501 containerd[2109]: time="2026-04-21T10:16:29.921427322Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:16:29.921501 containerd[2109]: time="2026-04-21T10:16:29.921456113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:29.922110 containerd[2109]: time="2026-04-21T10:16:29.921994676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:16:29.922110 containerd[2109]: time="2026-04-21T10:16:29.922025226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:29.923331 containerd[2109]: time="2026-04-21T10:16:29.922894192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:16:29.923331 containerd[2109]: time="2026-04-21T10:16:29.922925649Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:29.923331 containerd[2109]: time="2026-04-21T10:16:29.922946785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:16:29.923331 containerd[2109]: time="2026-04-21T10:16:29.922962111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:29.923331 containerd[2109]: time="2026-04-21T10:16:29.923071314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:29.923331 containerd[2109]: time="2026-04-21T10:16:29.923296200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:16:29.931564 containerd[2109]: time="2026-04-21T10:16:29.926819860Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:16:29.931564 containerd[2109]: time="2026-04-21T10:16:29.927557475Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:16:29.931564 containerd[2109]: time="2026-04-21T10:16:29.927705653Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:16:29.931564 containerd[2109]: time="2026-04-21T10:16:29.927765681Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.934642560Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.934726890Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.934755962Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.934825313Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.934850121Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.935039699Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.935536712Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.935680240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.935705211Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.935725245Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.935745785Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.935764699Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.935782122Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:16:29.936624 containerd[2109]: time="2026-04-21T10:16:29.935802091Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.935824271Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.935849145Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.935869848Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.935888811Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.935919360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.935940087Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.935958565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.935979464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.935999597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.936020380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.936038333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.936057606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.936077214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937219 containerd[2109]: time="2026-04-21T10:16:29.936099895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936117356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936138511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936160965Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936191541Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936223002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936242357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936259576Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936315853Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936339507Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936356062Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936374763Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936390423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936408958Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 21 10:16:29.937774 containerd[2109]: time="2026-04-21T10:16:29.936424343Z" level=info msg="NRI interface is disabled by configuration."
Apr 21 10:16:29.938286 containerd[2109]: time="2026-04-21T10:16:29.936439315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 21 10:16:29.945882 systemd-resolved[1988]: System hostname changed to 'ip-172-31-28-26'.
Apr 21 10:16:29.946022 systemd-hostnamed[2141]: Hostname set to (transient)
Apr 21 10:16:29.950598 containerd[2109]: time="2026-04-21T10:16:29.949646363Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 21 10:16:29.950598 containerd[2109]: time="2026-04-21T10:16:29.949755048Z" level=info msg="Connect containerd service"
Apr 21 10:16:29.950598 containerd[2109]: time="2026-04-21T10:16:29.949814830Z" level=info msg="using legacy CRI server"
Apr 21 10:16:29.950598 containerd[2109]: time="2026-04-21T10:16:29.949825499Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 21 10:16:29.950598 containerd[2109]: time="2026-04-21T10:16:29.949966807Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 21 10:16:29.951279 containerd[2109]: time="2026-04-21T10:16:29.951245633Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 21 10:16:29.951794 containerd[2109]: time="2026-04-21T10:16:29.951771068Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 21 10:16:29.951942 containerd[2109]: time="2026-04-21T10:16:29.951926450Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 21 10:16:29.952101 containerd[2109]: time="2026-04-21T10:16:29.952051586Z" level=info msg="Start subscribing containerd event"
Apr 21 10:16:29.952161 containerd[2109]: time="2026-04-21T10:16:29.952127099Z" level=info msg="Start recovering state"
Apr 21 10:16:29.952229 containerd[2109]: time="2026-04-21T10:16:29.952214167Z" level=info msg="Start event monitor"
Apr 21 10:16:29.952282 containerd[2109]: time="2026-04-21T10:16:29.952240900Z" level=info msg="Start snapshots syncer"
Apr 21 10:16:29.952282 containerd[2109]: time="2026-04-21T10:16:29.952255746Z" level=info msg="Start cni network conf syncer for default"
Apr 21 10:16:29.952282 containerd[2109]: time="2026-04-21T10:16:29.952267478Z" level=info msg="Start streaming server"
Apr 21 10:16:29.952401 containerd[2109]: time="2026-04-21T10:16:29.952346687Z" level=info msg="containerd successfully booted in 0.171939s"
Apr 21 10:16:29.953656 systemd[1]: Started containerd.service - containerd container runtime.
Apr 21 10:16:29.989260 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO Checking if agent identity type EC2 can be assumed
Apr 21 10:16:30.088566 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO Agent will take identity from EC2
Apr 21 10:16:30.189355 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:16:30.207683 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:16:30.207683 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:16:30.207683 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 21 10:16:30.207864 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 21 10:16:30.207864 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO [amazon-ssm-agent] Starting Core Agent
Apr 21 10:16:30.207864 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 21 10:16:30.207864 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO [Registrar] Starting registrar module
Apr 21 10:16:30.207864 amazon-ssm-agent[2165]: 2026-04-21 10:16:29 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 21 10:16:30.207864 amazon-ssm-agent[2165]: 2026-04-21 10:16:30 INFO [EC2Identity] EC2 registration was successful.
Apr 21 10:16:30.207864 amazon-ssm-agent[2165]: 2026-04-21 10:16:30 INFO [CredentialRefresher] credentialRefresher has started
Apr 21 10:16:30.207864 amazon-ssm-agent[2165]: 2026-04-21 10:16:30 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 21 10:16:30.207864 amazon-ssm-agent[2165]: 2026-04-21 10:16:30 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 21 10:16:30.246215 sshd_keygen[2121]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:16:30.287591 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:16:30.288659 amazon-ssm-agent[2165]: 2026-04-21 10:16:30 INFO [CredentialRefresher] Next credential rotation will be in 32.1999934571 minutes
Apr 21 10:16:30.303133 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:16:30.320542 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:16:30.320902 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:16:30.330989 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:16:30.348031 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 21 10:16:30.358535 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 21 10:16:30.371263 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:16:30.373012 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:16:30.520247 tar[2097]: linux-amd64/README.md
Apr 21 10:16:30.533048 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 10:16:31.220048 amazon-ssm-agent[2165]: 2026-04-21 10:16:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 21 10:16:31.321296 amazon-ssm-agent[2165]: 2026-04-21 10:16:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2328) started
Apr 21 10:16:31.421720 amazon-ssm-agent[2165]: 2026-04-21 10:16:31 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 21 10:16:31.658020 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:16:31.668494 systemd[1]: Started sshd@0-172.31.28.26:22-50.85.169.122:49592.service - OpenSSH per-connection server daemon (50.85.169.122:49592).
Apr 21 10:16:32.690029 sshd[2338]: Accepted publickey for core from 50.85.169.122 port 49592 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:16:32.693121 sshd[2338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:16:32.705271 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:16:32.712917 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:16:32.719337 systemd-logind[2075]: New session 1 of user core.
Apr 21 10:16:32.737072 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:16:32.746815 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:16:32.752616 (systemd)[2344]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:16:32.916626 systemd[2344]: Queued start job for default target default.target.
Apr 21 10:16:32.917134 systemd[2344]: Created slice app.slice - User Application Slice.
Apr 21 10:16:32.917164 systemd[2344]: Reached target paths.target - Paths.
Apr 21 10:16:32.917186 systemd[2344]: Reached target timers.target - Timers.
Apr 21 10:16:32.923626 systemd[2344]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:16:32.924086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:16:32.924697 (kubelet)[2357]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:16:32.928932 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:16:32.935858 systemd[2344]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:16:32.935942 systemd[2344]: Reached target sockets.target - Sockets.
Apr 21 10:16:32.935961 systemd[2344]: Reached target basic.target - Basic System.
Apr 21 10:16:32.936018 systemd[2344]: Reached target default.target - Main User Target.
Apr 21 10:16:32.936054 systemd[2344]: Startup finished in 175ms.
Apr 21 10:16:32.937621 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:16:32.946916 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:16:32.949133 systemd[1]: Startup finished in 7.573s (kernel) + 9.333s (userspace) = 16.907s.
Apr 21 10:16:33.659822 systemd[1]: Started sshd@1-172.31.28.26:22-50.85.169.122:49608.service - OpenSSH per-connection server daemon (50.85.169.122:49608).
Apr 21 10:16:34.533455 kubelet[2357]: E0421 10:16:34.533362 2357 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:16:34.536631 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:16:34.536950 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:16:34.675095 sshd[2373]: Accepted publickey for core from 50.85.169.122 port 49608 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:16:34.676654 sshd[2373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:16:34.682074 systemd-logind[2075]: New session 2 of user core.
Apr 21 10:16:34.689074 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:16:35.386184 sshd[2373]: pam_unix(sshd:session): session closed for user core
Apr 21 10:16:35.390793 systemd-logind[2075]: Session 2 logged out. Waiting for processes to exit.
Apr 21 10:16:35.392443 systemd[1]: sshd@1-172.31.28.26:22-50.85.169.122:49608.service: Deactivated successfully.
Apr 21 10:16:35.396280 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 10:16:35.397227 systemd-logind[2075]: Removed session 2.
Apr 21 10:16:35.560833 systemd[1]: Started sshd@2-172.31.28.26:22-50.85.169.122:49618.service - OpenSSH per-connection server daemon (50.85.169.122:49618).
Apr 21 10:16:36.865268 systemd-resolved[1988]: Clock change detected. Flushing caches.
Apr 21 10:16:37.608449 sshd[2384]: Accepted publickey for core from 50.85.169.122 port 49618 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:16:37.610099 sshd[2384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:16:37.615425 systemd-logind[2075]: New session 3 of user core.
Apr 21 10:16:37.621446 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:16:38.312748 sshd[2384]: pam_unix(sshd:session): session closed for user core
Apr 21 10:16:38.317267 systemd[1]: sshd@2-172.31.28.26:22-50.85.169.122:49618.service: Deactivated successfully.
Apr 21 10:16:38.321343 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 10:16:38.322227 systemd-logind[2075]: Session 3 logged out. Waiting for processes to exit.
Apr 21 10:16:38.324171 systemd-logind[2075]: Removed session 3.
Apr 21 10:16:38.491433 systemd[1]: Started sshd@3-172.31.28.26:22-50.85.169.122:49628.service - OpenSSH per-connection server daemon (50.85.169.122:49628).
Apr 21 10:16:39.504433 sshd[2392]: Accepted publickey for core from 50.85.169.122 port 49628 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:16:39.506016 sshd[2392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:16:39.511369 systemd-logind[2075]: New session 4 of user core.
Apr 21 10:16:39.520478 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:16:40.212749 sshd[2392]: pam_unix(sshd:session): session closed for user core
Apr 21 10:16:40.216267 systemd[1]: sshd@3-172.31.28.26:22-50.85.169.122:49628.service: Deactivated successfully.
Apr 21 10:16:40.221292 systemd-logind[2075]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:16:40.222221 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:16:40.223589 systemd-logind[2075]: Removed session 4.
Apr 21 10:16:40.373386 systemd[1]: Started sshd@4-172.31.28.26:22-50.85.169.122:42110.service - OpenSSH per-connection server daemon (50.85.169.122:42110).
Apr 21 10:16:41.353790 sshd[2400]: Accepted publickey for core from 50.85.169.122 port 42110 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:16:41.354463 sshd[2400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:16:41.359769 systemd-logind[2075]: New session 5 of user core.
Apr 21 10:16:41.365474 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:16:41.912488 sudo[2404]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:16:41.912974 sudo[2404]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:16:41.929892 sudo[2404]: pam_unix(sudo:session): session closed for user root
Apr 21 10:16:42.090713 sshd[2400]: pam_unix(sshd:session): session closed for user core
Apr 21 10:16:42.094776 systemd[1]: sshd@4-172.31.28.26:22-50.85.169.122:42110.service: Deactivated successfully.
Apr 21 10:16:42.099765 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:16:42.100421 systemd-logind[2075]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:16:42.102249 systemd-logind[2075]: Removed session 5.
Apr 21 10:16:42.264465 systemd[1]: Started sshd@5-172.31.28.26:22-50.85.169.122:42118.service - OpenSSH per-connection server daemon (50.85.169.122:42118).
Apr 21 10:16:43.245509 sshd[2409]: Accepted publickey for core from 50.85.169.122 port 42118 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:16:43.247202 sshd[2409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:16:43.252668 systemd-logind[2075]: New session 6 of user core.
Apr 21 10:16:43.258476 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 21 10:16:43.772356 sudo[2414]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 21 10:16:43.772747 sudo[2414]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:16:43.776679 sudo[2414]: pam_unix(sudo:session): session closed for user root
Apr 21 10:16:43.782426 sudo[2413]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 21 10:16:43.782819 sudo[2413]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:16:43.802527 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 21 10:16:43.804776 auditctl[2417]: No rules
Apr 21 10:16:43.805627 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 21 10:16:43.805981 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 21 10:16:43.813970 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:16:43.851474 augenrules[2436]: No rules
Apr 21 10:16:43.853347 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:16:43.856748 sudo[2413]: pam_unix(sudo:session): session closed for user root
Apr 21 10:16:44.018790 sshd[2409]: pam_unix(sshd:session): session closed for user core
Apr 21 10:16:44.024640 systemd[1]: sshd@5-172.31.28.26:22-50.85.169.122:42118.service: Deactivated successfully.
Apr 21 10:16:44.026070 systemd-logind[2075]: Session 6 logged out. Waiting for processes to exit.
Apr 21 10:16:44.029264 systemd[1]: session-6.scope: Deactivated successfully.
Apr 21 10:16:44.030526 systemd-logind[2075]: Removed session 6.
Apr 21 10:16:44.193449 systemd[1]: Started sshd@6-172.31.28.26:22-50.85.169.122:42132.service - OpenSSH per-connection server daemon (50.85.169.122:42132).
Apr 21 10:16:45.173806 sshd[2445]: Accepted publickey for core from 50.85.169.122 port 42132 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:16:45.175361 sshd[2445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:16:45.181205 systemd-logind[2075]: New session 7 of user core.
Apr 21 10:16:45.187483 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 21 10:16:45.699973 sudo[2449]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 21 10:16:45.700393 sudo[2449]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:16:45.702659 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:16:45.708476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:16:46.037877 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:16:46.045386 (kubelet)[2471]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:16:46.128245 kubelet[2471]: E0421 10:16:46.128194 2471 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:16:46.132390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:16:46.132650 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:16:46.508657 (dockerd)[2484]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 21 10:16:46.508905 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 21 10:16:47.299865 dockerd[2484]: time="2026-04-21T10:16:47.299799495Z" level=info msg="Starting up"
Apr 21 10:16:48.896344 dockerd[2484]: time="2026-04-21T10:16:48.896290913Z" level=info msg="Loading containers: start."
Apr 21 10:16:49.075054 kernel: Initializing XFRM netlink socket
Apr 21 10:16:49.129617 (udev-worker)[2506]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:16:49.244506 systemd-networkd[1659]: docker0: Link UP
Apr 21 10:16:49.282088 dockerd[2484]: time="2026-04-21T10:16:49.282034878Z" level=info msg="Loading containers: done."
Apr 21 10:16:49.320565 dockerd[2484]: time="2026-04-21T10:16:49.320505433Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:16:49.320987 dockerd[2484]: time="2026-04-21T10:16:49.320636905Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:16:49.320987 dockerd[2484]: time="2026-04-21T10:16:49.320932792Z" level=info msg="Daemon has completed initialization"
Apr 21 10:16:49.361891 dockerd[2484]: time="2026-04-21T10:16:49.361411439Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:16:49.361897 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:16:50.585654 containerd[2109]: time="2026-04-21T10:16:50.585608622Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 21 10:16:51.163224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount8264499.mount: Deactivated successfully.
Apr 21 10:16:52.697757 containerd[2109]: time="2026-04-21T10:16:52.697700891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:52.699190 containerd[2109]: time="2026-04-21T10:16:52.699051504Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193989"
Apr 21 10:16:52.700703 containerd[2109]: time="2026-04-21T10:16:52.700319677Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:52.705332 containerd[2109]: time="2026-04-21T10:16:52.705281101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:52.706636 containerd[2109]: time="2026-04-21T10:16:52.706594438Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.120945307s"
Apr 21 10:16:52.706795 containerd[2109]: time="2026-04-21T10:16:52.706774625Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 21 10:16:52.707474 containerd[2109]: time="2026-04-21T10:16:52.707435577Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 21 10:16:54.454770 containerd[2109]: time="2026-04-21T10:16:54.454703387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:54.456271 containerd[2109]: time="2026-04-21T10:16:54.456048701Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171447"
Apr 21 10:16:54.458048 containerd[2109]: time="2026-04-21T10:16:54.457533239Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:54.461049 containerd[2109]: time="2026-04-21T10:16:54.460992544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:54.462386 containerd[2109]: time="2026-04-21T10:16:54.462347198Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.754746315s"
Apr 21 10:16:54.462495 containerd[2109]: time="2026-04-21T10:16:54.462392709Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 21 10:16:54.462972 containerd[2109]: time="2026-04-21T10:16:54.462907567Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 21 10:16:55.828143 containerd[2109]: time="2026-04-21T10:16:55.828089848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:55.833050 containerd[2109]: time="2026-04-21T10:16:55.830953292Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289756"
Apr 21 10:16:55.833050 containerd[2109]: time="2026-04-21T10:16:55.831126438Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:55.838246 containerd[2109]: time="2026-04-21T10:16:55.838196822Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:55.839376 containerd[2109]: time="2026-04-21T10:16:55.839332044Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.376235635s"
Apr 21 10:16:55.839604 containerd[2109]: time="2026-04-21T10:16:55.839381244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 21 10:16:55.839885 containerd[2109]: time="2026-04-21T10:16:55.839845249Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 21 10:16:56.296172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 10:16:56.306361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:16:56.609654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:16:56.613688 (kubelet)[2704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:16:56.690052 kubelet[2704]: E0421 10:16:56.689168 2704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:16:56.692441 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:16:56.692696 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:16:57.051563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount579823439.mount: Deactivated successfully.
Apr 21 10:16:57.659609 containerd[2109]: time="2026-04-21T10:16:57.659538361Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:57.660949 containerd[2109]: time="2026-04-21T10:16:57.660691985Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010711"
Apr 21 10:16:57.662684 containerd[2109]: time="2026-04-21T10:16:57.662436964Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:57.672132 containerd[2109]: time="2026-04-21T10:16:57.672034203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:57.673256 containerd[2109]: time="2026-04-21T10:16:57.673047474Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.833141284s"
Apr 21 10:16:57.673256 containerd[2109]: time="2026-04-21T10:16:57.673093374Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 21 10:16:57.674090 containerd[2109]: time="2026-04-21T10:16:57.673817471Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 21 10:16:58.225862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3417943540.mount: Deactivated successfully.
Apr 21 10:16:59.490863 containerd[2109]: time="2026-04-21T10:16:59.490796589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:59.492457 containerd[2109]: time="2026-04-21T10:16:59.492234715Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Apr 21 10:16:59.494237 containerd[2109]: time="2026-04-21T10:16:59.494199931Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:59.497972 containerd[2109]: time="2026-04-21T10:16:59.497588561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:16:59.498808 containerd[2109]: time="2026-04-21T10:16:59.498769313Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.824917147s"
Apr 21 10:16:59.498896 containerd[2109]: time="2026-04-21T10:16:59.498819483Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 21 10:16:59.499392 containerd[2109]: time="2026-04-21T10:16:59.499361210Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 21 10:17:00.042996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2211071214.mount: Deactivated successfully.
Apr 21 10:17:00.050335 containerd[2109]: time="2026-04-21T10:17:00.050278979Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:00.051772 containerd[2109]: time="2026-04-21T10:17:00.051611510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 21 10:17:00.060457 containerd[2109]: time="2026-04-21T10:17:00.060377183Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:00.077063 containerd[2109]: time="2026-04-21T10:17:00.076387454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:00.082812 containerd[2109]: time="2026-04-21T10:17:00.078012255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 578.611444ms"
Apr 21 10:17:00.082812 containerd[2109]: time="2026-04-21T10:17:00.080999067Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 21 10:17:00.086347 containerd[2109]: time="2026-04-21T10:17:00.086298492Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 21 10:17:00.682686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1909542587.mount: Deactivated successfully.
Apr 21 10:17:01.016244 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 21 10:17:03.161278 containerd[2109]: time="2026-04-21T10:17:03.161217311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:03.163050 containerd[2109]: time="2026-04-21T10:17:03.162904512Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719426"
Apr 21 10:17:03.165276 containerd[2109]: time="2026-04-21T10:17:03.165191877Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:03.175059 containerd[2109]: time="2026-04-21T10:17:03.173880432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:03.176721 containerd[2109]: time="2026-04-21T10:17:03.176550762Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.090037631s"
Apr 21 10:17:03.176721 containerd[2109]: time="2026-04-21T10:17:03.176606173Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 21 10:17:06.658480 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:06.665372 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:06.713938 systemd[1]: Reloading requested from client PID 2868 ('systemctl') (unit session-7.scope)...
Apr 21 10:17:06.713963 systemd[1]: Reloading...
Apr 21 10:17:06.824048 zram_generator::config[2904]: No configuration found.
Apr 21 10:17:07.002560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:17:07.093288 systemd[1]: Reloading finished in 378 ms.
Apr 21 10:17:07.146174 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:17:07.146511 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:17:07.147352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:07.156339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:17:07.366395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:17:07.373609 (kubelet)[2983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:17:07.430358 kubelet[2983]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:17:07.430358 kubelet[2983]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:17:07.430358 kubelet[2983]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:17:07.434051 kubelet[2983]: I0421 10:17:07.433180 2983 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:17:07.895092 kubelet[2983]: I0421 10:17:07.895009 2983 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 21 10:17:07.895092 kubelet[2983]: I0421 10:17:07.895085 2983 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:17:07.895446 kubelet[2983]: I0421 10:17:07.895424 2983 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:17:07.958145 kubelet[2983]: I0421 10:17:07.958101 2983 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:17:07.964975 kubelet[2983]: E0421 10:17:07.964729 2983 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:17:07.974691 kubelet[2983]: E0421 10:17:07.974643 2983 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:17:07.974691 kubelet[2983]: I0421 10:17:07.974687 2983 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:17:07.984824 kubelet[2983]: I0421 10:17:07.984719 2983 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 21 10:17:07.987620 kubelet[2983]: I0421 10:17:07.987564 2983 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:17:07.991454 kubelet[2983]: I0421 10:17:07.987616 2983 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 21 10:17:07.991454 kubelet[2983]: I0421 10:17:07.991459 2983 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:17:07.991685 kubelet[2983]: I0421 10:17:07.991477 2983 container_manager_linux.go:303] "Creating device plugin manager"
Apr 21 10:17:07.993208 kubelet[2983]: I0421 10:17:07.993171 2983 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:17:08.000946 kubelet[2983]: I0421 10:17:08.000897 2983 kubelet.go:480] "Attempting to sync node with API server"
Apr 21 10:17:08.000946 kubelet[2983]: I0421 10:17:08.000961 2983 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:17:08.001149 kubelet[2983]: I0421 10:17:08.000999 2983 kubelet.go:386] "Adding apiserver pod source"
Apr 21 10:17:08.007547 kubelet[2983]: I0421 10:17:08.007245 2983 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:17:08.014999 kubelet[2983]: E0421 10:17:08.014736 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-26&limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:17:08.014999 kubelet[2983]: E0421 10:17:08.014905 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:17:08.023047 kubelet[2983]: I0421 10:17:08.021642 2983 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:17:08.023047 kubelet[2983]: I0421 10:17:08.022454 2983 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:17:08.023601 kubelet[2983]: W0421 10:17:08.023579 2983 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:17:08.032545 kubelet[2983]: I0421 10:17:08.032514 2983 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 21 10:17:08.032887 kubelet[2983]: I0421 10:17:08.032726 2983 server.go:1289] "Started kubelet"
Apr 21 10:17:08.034516 kubelet[2983]: I0421 10:17:08.034490 2983 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:17:08.041761 kubelet[2983]: E0421 10:17:08.038454 2983 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.26:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-26.18a857d5a0a818a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-26,UID:ip-172-31-28-26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-26,},FirstTimestamp:2026-04-21 10:17:08.032682148 +0000 UTC m=+0.653712968,LastTimestamp:2026-04-21 10:17:08.032682148 +0000 UTC m=+0.653712968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-26,}"
Apr 21 10:17:08.042685 kubelet[2983]: I0421 10:17:08.042085 2983 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:17:08.043927 kubelet[2983]: I0421 10:17:08.043901 2983 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:17:08.044931 kubelet[2983]: I0421 10:17:08.044913 2983 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 21 10:17:08.046306 kubelet[2983]: E0421 10:17:08.045411 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found"
Apr 21 10:17:08.046306 kubelet[2983]: I0421 10:17:08.045756 2983 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 21 10:17:08.046306 kubelet[2983]: I0421 10:17:08.045810 2983 reconciler.go:26] "Reconciler: start to sync state"
Apr 21 10:17:08.048596 kubelet[2983]: I0421 10:17:08.048533 2983 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:17:08.048929 kubelet[2983]: I0421 10:17:08.048908 2983 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:17:08.049210 kubelet[2983]: I0421 10:17:08.049189 2983 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:17:08.052288 kubelet[2983]: E0421 10:17:08.052250 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 10:17:08.052398 kubelet[2983]: E0421 10:17:08.052351 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-26?timeout=10s\": dial tcp 172.31.28.26:6443: connect: connection refused" interval="200ms"
Apr 21 10:17:08.053911 kubelet[2983]: I0421 10:17:08.053892 2983 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:17:08.054144 kubelet[2983]: I0421 10:17:08.054122 2983 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:17:08.056129 kubelet[2983]: I0421 10:17:08.056112 2983 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:17:08.072776 kubelet[2983]: I0421 10:17:08.072710 2983 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:17:08.075356 kubelet[2983]: I0421 10:17:08.075316 2983 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:17:08.075356 kubelet[2983]: I0421 10:17:08.075345 2983 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 10:17:08.075506 kubelet[2983]: I0421 10:17:08.075371 2983 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:17:08.075506 kubelet[2983]: I0421 10:17:08.075380 2983 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 10:17:08.075506 kubelet[2983]: E0421 10:17:08.075429 2983 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:17:08.085686 kubelet[2983]: E0421 10:17:08.085640 2983 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:17:08.086257 kubelet[2983]: E0421 10:17:08.086079 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 10:17:08.102441 kubelet[2983]: I0421 10:17:08.102417 2983 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:17:08.102646 kubelet[2983]: I0421 10:17:08.102580 2983 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:17:08.102738 kubelet[2983]: I0421 10:17:08.102645 2983 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:17:08.105511 kubelet[2983]: I0421 10:17:08.105270 2983 policy_none.go:49] "None policy: Start"
Apr 21 10:17:08.105511 kubelet[2983]: I0421 10:17:08.105294 2983 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 10:17:08.105511 kubelet[2983]: I0421 10:17:08.105318 2983 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 10:17:08.110988 kubelet[2983]: E0421 10:17:08.110955 2983 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:17:08.111202 kubelet[2983]: I0421 10:17:08.111181 2983 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:17:08.111259 kubelet[2983]: I0421 10:17:08.111203 2983 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:17:08.112578 kubelet[2983]: I0421 10:17:08.112549 2983 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:17:08.117050 kubelet[2983]: E0421 10:17:08.116434 2983 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:17:08.117050 kubelet[2983]: E0421 10:17:08.116477 2983 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-26\" not found"
Apr 21 10:17:08.185613 kubelet[2983]: E0421 10:17:08.185498 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-26\" not found" node="ip-172-31-28-26"
Apr 21 10:17:08.192867 kubelet[2983]: E0421 10:17:08.192831 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-26\" not found" node="ip-172-31-28-26"
Apr 21 10:17:08.194789 kubelet[2983]: E0421 10:17:08.194757 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-26\" not found" node="ip-172-31-28-26"
Apr 21 10:17:08.213461 kubelet[2983]: I0421 10:17:08.213427 2983 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-26"
Apr 21 10:17:08.213804 kubelet[2983]: E0421 10:17:08.213774 2983 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.26:6443/api/v1/nodes\": dial tcp 172.31.28.26:6443: connect: connection refused" node="ip-172-31-28-26"
Apr 21 10:17:08.247402 kubelet[2983]: I0421 10:17:08.247227 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a690e564996de536f0dd45135e2ee1c8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-26\" (UID: \"a690e564996de536f0dd45135e2ee1c8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-26"
Apr 21 10:17:08.247402 kubelet[2983]: I0421 10:17:08.247281 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/431d5a200bf477df85cbb4546ba5b32b-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-26\" (UID: \"431d5a200bf477df85cbb4546ba5b32b\") " pod="kube-system/kube-scheduler-ip-172-31-28-26"
Apr 21 10:17:08.247402 kubelet[2983]: I0421 10:17:08.247321 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4efb89d8de1589573e7db4ffeb6bfbb9-ca-certs\") pod \"kube-apiserver-ip-172-31-28-26\" (UID: \"4efb89d8de1589573e7db4ffeb6bfbb9\") " pod="kube-system/kube-apiserver-ip-172-31-28-26"
Apr 21 10:17:08.247402 kubelet[2983]: I0421 10:17:08.247364 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a690e564996de536f0dd45135e2ee1c8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-26\" (UID: \"a690e564996de536f0dd45135e2ee1c8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-26"
Apr 21 10:17:08.247402 kubelet[2983]: I0421 10:17:08.247395 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a690e564996de536f0dd45135e2ee1c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-26\" (UID: \"a690e564996de536f0dd45135e2ee1c8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-26"
Apr 21 10:17:08.247688 kubelet[2983]: I0421 10:17:08.247423 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4efb89d8de1589573e7db4ffeb6bfbb9-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-26\" (UID: \"4efb89d8de1589573e7db4ffeb6bfbb9\") " pod="kube-system/kube-apiserver-ip-172-31-28-26"
Apr 21 10:17:08.247688 kubelet[2983]: I0421 10:17:08.247449 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4efb89d8de1589573e7db4ffeb6bfbb9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-26\" (UID: \"4efb89d8de1589573e7db4ffeb6bfbb9\") " pod="kube-system/kube-apiserver-ip-172-31-28-26" Apr 21 10:17:08.247688 kubelet[2983]: I0421 10:17:08.247475 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a690e564996de536f0dd45135e2ee1c8-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-26\" (UID: \"a690e564996de536f0dd45135e2ee1c8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-26" Apr 21 10:17:08.247688 kubelet[2983]: I0421 10:17:08.247497 2983 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a690e564996de536f0dd45135e2ee1c8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-26\" (UID: \"a690e564996de536f0dd45135e2ee1c8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-26" Apr 21 10:17:08.253863 kubelet[2983]: E0421 10:17:08.253801 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-26?timeout=10s\": dial tcp 172.31.28.26:6443: connect: connection refused" interval="400ms" Apr 21 10:17:08.416110 kubelet[2983]: I0421 10:17:08.416080 2983 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-26" Apr 21 10:17:08.416694 kubelet[2983]: E0421 10:17:08.416510 2983 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.26:6443/api/v1/nodes\": dial tcp 172.31.28.26:6443: connect: connection refused" node="ip-172-31-28-26" Apr 21 10:17:08.487110 containerd[2109]: time="2026-04-21T10:17:08.486969478Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-26,Uid:4efb89d8de1589573e7db4ffeb6bfbb9,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:08.497880 containerd[2109]: time="2026-04-21T10:17:08.497838303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-26,Uid:431d5a200bf477df85cbb4546ba5b32b,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:08.498342 containerd[2109]: time="2026-04-21T10:17:08.497838306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-26,Uid:a690e564996de536f0dd45135e2ee1c8,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:08.655249 kubelet[2983]: E0421 10:17:08.655206 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-26?timeout=10s\": dial tcp 172.31.28.26:6443: connect: connection refused" interval="800ms" Apr 21 10:17:08.818818 kubelet[2983]: I0421 10:17:08.818568 2983 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-26" Apr 21 10:17:08.818941 kubelet[2983]: E0421 10:17:08.818878 2983 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.26:6443/api/v1/nodes\": dial tcp 172.31.28.26:6443: connect: connection refused" node="ip-172-31-28-26" Apr 21 10:17:09.018619 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656002855.mount: Deactivated successfully. 
Apr 21 10:17:09.029114 containerd[2109]: time="2026-04-21T10:17:09.028850177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:09.030149 containerd[2109]: time="2026-04-21T10:17:09.030082858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 21 10:17:09.031312 containerd[2109]: time="2026-04-21T10:17:09.031274774Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:09.033055 containerd[2109]: time="2026-04-21T10:17:09.033004287Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:09.034277 containerd[2109]: time="2026-04-21T10:17:09.034242980Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:09.035972 containerd[2109]: time="2026-04-21T10:17:09.035840887Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:17:09.036182 containerd[2109]: time="2026-04-21T10:17:09.036146118Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:17:09.037134 containerd[2109]: time="2026-04-21T10:17:09.037104051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:17:09.039195 
containerd[2109]: time="2026-04-21T10:17:09.039069012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 540.822864ms" Apr 21 10:17:09.041498 containerd[2109]: time="2026-04-21T10:17:09.041399697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.301368ms" Apr 21 10:17:09.046227 containerd[2109]: time="2026-04-21T10:17:09.046121564Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 548.036219ms" Apr 21 10:17:09.280280 kubelet[2983]: E0421 10:17:09.279229 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:17:09.380477 kubelet[2983]: E0421 10:17:09.380437 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Apr 21 10:17:09.414471 kubelet[2983]: E0421 10:17:09.414411 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:17:09.456104 kubelet[2983]: E0421 10:17:09.456049 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-26?timeout=10s\": dial tcp 172.31.28.26:6443: connect: connection refused" interval="1.6s" Apr 21 10:17:09.458611 containerd[2109]: time="2026-04-21T10:17:09.458208560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:09.458611 containerd[2109]: time="2026-04-21T10:17:09.458316207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:09.458611 containerd[2109]: time="2026-04-21T10:17:09.458374084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:09.458611 containerd[2109]: time="2026-04-21T10:17:09.458489341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:09.469349 containerd[2109]: time="2026-04-21T10:17:09.469212112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:09.470304 containerd[2109]: time="2026-04-21T10:17:09.470235036Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:09.470459 containerd[2109]: time="2026-04-21T10:17:09.470337888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:09.470621 containerd[2109]: time="2026-04-21T10:17:09.470512423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:09.472125 containerd[2109]: time="2026-04-21T10:17:09.471807555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:09.472125 containerd[2109]: time="2026-04-21T10:17:09.471862396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:09.472125 containerd[2109]: time="2026-04-21T10:17:09.471878376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:09.472125 containerd[2109]: time="2026-04-21T10:17:09.471986749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:09.603807 containerd[2109]: time="2026-04-21T10:17:09.603767162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-26,Uid:4efb89d8de1589573e7db4ffeb6bfbb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f0eef0aea50f4659a7213af5697512ab77ad9032dcd75c296581589043dab19\"" Apr 21 10:17:09.615272 containerd[2109]: time="2026-04-21T10:17:09.613091212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-26,Uid:a690e564996de536f0dd45135e2ee1c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d772af21f4eac206c5bd3aefc64e271c9122c171dfc541083643bdaf5af1bb90\"" Apr 21 10:17:09.619638 containerd[2109]: time="2026-04-21T10:17:09.619578493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-26,Uid:431d5a200bf477df85cbb4546ba5b32b,Namespace:kube-system,Attempt:0,} returns sandbox id \"759fe3f7b3a4c589fa2557bcaa8490a478a6483c26ee2105acbd2b52e98b621e\"" Apr 21 10:17:09.621607 kubelet[2983]: E0421 10:17:09.621563 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-26&limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:17:09.622368 kubelet[2983]: I0421 10:17:09.622343 2983 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-26" Apr 21 10:17:09.622687 kubelet[2983]: E0421 10:17:09.622659 2983 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.26:6443/api/v1/nodes\": dial tcp 172.31.28.26:6443: connect: connection refused" node="ip-172-31-28-26" Apr 21 10:17:09.632927 containerd[2109]: time="2026-04-21T10:17:09.632725314Z" level=info msg="CreateContainer within sandbox 
\"1f0eef0aea50f4659a7213af5697512ab77ad9032dcd75c296581589043dab19\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:17:09.635017 containerd[2109]: time="2026-04-21T10:17:09.634977209Z" level=info msg="CreateContainer within sandbox \"759fe3f7b3a4c589fa2557bcaa8490a478a6483c26ee2105acbd2b52e98b621e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:17:09.637274 containerd[2109]: time="2026-04-21T10:17:09.637230785Z" level=info msg="CreateContainer within sandbox \"d772af21f4eac206c5bd3aefc64e271c9122c171dfc541083643bdaf5af1bb90\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:17:09.662500 containerd[2109]: time="2026-04-21T10:17:09.662459460Z" level=info msg="CreateContainer within sandbox \"759fe3f7b3a4c589fa2557bcaa8490a478a6483c26ee2105acbd2b52e98b621e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"17e74d5cc959625aaa3c057f50357cdba07e8790e52b85c7ea24d92ea5109935\"" Apr 21 10:17:09.664259 containerd[2109]: time="2026-04-21T10:17:09.663444585Z" level=info msg="StartContainer for \"17e74d5cc959625aaa3c057f50357cdba07e8790e52b85c7ea24d92ea5109935\"" Apr 21 10:17:09.670167 containerd[2109]: time="2026-04-21T10:17:09.670116262Z" level=info msg="CreateContainer within sandbox \"d772af21f4eac206c5bd3aefc64e271c9122c171dfc541083643bdaf5af1bb90\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b443225e566a93e8c306590d927807a3e33c92bc67cb364bad759114c31962b4\"" Apr 21 10:17:09.670645 containerd[2109]: time="2026-04-21T10:17:09.670611414Z" level=info msg="CreateContainer within sandbox \"1f0eef0aea50f4659a7213af5697512ab77ad9032dcd75c296581589043dab19\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e84922ebef2c594260d5dee470a5277a76bba131e2b0b475774a6354c23f98c5\"" Apr 21 10:17:09.672286 containerd[2109]: time="2026-04-21T10:17:09.672255967Z" level=info msg="StartContainer for 
\"b443225e566a93e8c306590d927807a3e33c92bc67cb364bad759114c31962b4\"" Apr 21 10:17:09.678460 containerd[2109]: time="2026-04-21T10:17:09.678418079Z" level=info msg="StartContainer for \"e84922ebef2c594260d5dee470a5277a76bba131e2b0b475774a6354c23f98c5\"" Apr 21 10:17:09.837648 containerd[2109]: time="2026-04-21T10:17:09.837397634Z" level=info msg="StartContainer for \"b443225e566a93e8c306590d927807a3e33c92bc67cb364bad759114c31962b4\" returns successfully" Apr 21 10:17:09.842653 containerd[2109]: time="2026-04-21T10:17:09.842602294Z" level=info msg="StartContainer for \"17e74d5cc959625aaa3c057f50357cdba07e8790e52b85c7ea24d92ea5109935\" returns successfully" Apr 21 10:17:09.853962 containerd[2109]: time="2026-04-21T10:17:09.852951414Z" level=info msg="StartContainer for \"e84922ebef2c594260d5dee470a5277a76bba131e2b0b475774a6354c23f98c5\" returns successfully" Apr 21 10:17:09.997101 kubelet[2983]: E0421 10:17:09.994857 2983 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.26:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:17:10.126618 kubelet[2983]: E0421 10:17:10.126509 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-26\" not found" node="ip-172-31-28-26" Apr 21 10:17:10.128693 kubelet[2983]: E0421 10:17:10.126755 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-26\" not found" node="ip-172-31-28-26" Apr 21 10:17:10.132077 kubelet[2983]: E0421 10:17:10.130665 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-26\" not found" node="ip-172-31-28-26" Apr 21 10:17:11.057247 
kubelet[2983]: E0421 10:17:11.057199 2983 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.26:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-26?timeout=10s\": dial tcp 172.31.28.26:6443: connect: connection refused" interval="3.2s" Apr 21 10:17:11.131039 kubelet[2983]: E0421 10:17:11.130993 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-26\" not found" node="ip-172-31-28-26" Apr 21 10:17:11.131485 kubelet[2983]: E0421 10:17:11.131361 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-26\" not found" node="ip-172-31-28-26" Apr 21 10:17:11.213301 kubelet[2983]: E0421 10:17:11.213232 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.26:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:17:11.225258 kubelet[2983]: I0421 10:17:11.225226 2983 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-26" Apr 21 10:17:11.225628 kubelet[2983]: E0421 10:17:11.225587 2983 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.26:6443/api/v1/nodes\": dial tcp 172.31.28.26:6443: connect: connection refused" node="ip-172-31-28-26" Apr 21 10:17:11.293620 kubelet[2983]: E0421 10:17:11.293569 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.26:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:17:11.315291 
kubelet[2983]: E0421 10:17:11.315159 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.26:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-26&limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:17:11.446580 kubelet[2983]: E0421 10:17:11.446464 2983 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.26:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.26:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-26.18a857d5a0a818a4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-26,UID:ip-172-31-28-26,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-26,},FirstTimestamp:2026-04-21 10:17:08.032682148 +0000 UTC m=+0.653712968,LastTimestamp:2026-04-21 10:17:08.032682148 +0000 UTC m=+0.653712968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-26,}" Apr 21 10:17:11.602641 kubelet[2983]: E0421 10:17:11.602531 2983 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.26:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.26:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:17:12.496312 kubelet[2983]: E0421 10:17:12.496276 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-26\" not found" node="ip-172-31-28-26" Apr 21 10:17:13.889313 kubelet[2983]: E0421 10:17:13.889274 2983 
csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-28-26" not found Apr 21 10:17:14.100064 kubelet[2983]: E0421 10:17:14.099428 2983 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-26\" not found" node="ip-172-31-28-26" Apr 21 10:17:14.252143 kubelet[2983]: E0421 10:17:14.252036 2983 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ip-172-31-28-26" not found Apr 21 10:17:14.262457 kubelet[2983]: E0421 10:17:14.262359 2983 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-26\" not found" node="ip-172-31-28-26" Apr 21 10:17:14.430766 kubelet[2983]: I0421 10:17:14.430730 2983 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-26" Apr 21 10:17:14.451131 kubelet[2983]: I0421 10:17:14.450585 2983 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-26" Apr 21 10:17:14.451131 kubelet[2983]: E0421 10:17:14.450635 2983 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-26\": node \"ip-172-31-28-26\" not found" Apr 21 10:17:14.463085 kubelet[2983]: E0421 10:17:14.463050 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:14.563416 kubelet[2983]: E0421 10:17:14.563354 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:14.665161 kubelet[2983]: E0421 10:17:14.665112 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:14.766082 kubelet[2983]: E0421 10:17:14.766011 2983 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:14.867498 kubelet[2983]: E0421 10:17:14.867069 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:14.968242 kubelet[2983]: E0421 10:17:14.968196 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:15.068870 kubelet[2983]: E0421 10:17:15.068795 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:15.077240 update_engine[2080]: I20260421 10:17:15.077139 2080 update_attempter.cc:509] Updating boot flags... Apr 21 10:17:15.149156 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3277) Apr 21 10:17:15.170057 kubelet[2983]: E0421 10:17:15.169317 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:15.271107 kubelet[2983]: E0421 10:17:15.269910 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:15.317152 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3281) Apr 21 10:17:15.371537 kubelet[2983]: E0421 10:17:15.371460 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:15.472388 kubelet[2983]: E0421 10:17:15.472212 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:15.572550 kubelet[2983]: E0421 10:17:15.572503 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:15.673468 kubelet[2983]: E0421 10:17:15.673415 2983 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:15.774232 kubelet[2983]: E0421 10:17:15.774091 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:15.852727 systemd[1]: Reloading requested from client PID 3446 ('systemctl') (unit session-7.scope)... Apr 21 10:17:15.852810 systemd[1]: Reloading... Apr 21 10:17:15.875047 kubelet[2983]: E0421 10:17:15.874931 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:15.960090 zram_generator::config[3492]: No configuration found. Apr 21 10:17:15.975323 kubelet[2983]: E0421 10:17:15.975283 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:16.076409 kubelet[2983]: E0421 10:17:16.076363 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:16.088232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:17:16.176856 kubelet[2983]: E0421 10:17:16.176804 2983 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found" Apr 21 10:17:16.181767 systemd[1]: Reloading finished in 328 ms. Apr 21 10:17:16.220219 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:17:16.234700 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:17:16.235357 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:17:16.243852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:17:16.454295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:17:16.455901 (kubelet)[3556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:17:16.546431 kubelet[3556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:17:16.546431 kubelet[3556]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:17:16.546431 kubelet[3556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:17:16.546938 kubelet[3556]: I0421 10:17:16.546459 3556 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:17:16.554733 kubelet[3556]: I0421 10:17:16.554693 3556 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:17:16.554733 kubelet[3556]: I0421 10:17:16.554722 3556 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:17:16.555082 kubelet[3556]: I0421 10:17:16.555061 3556 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:17:16.556481 kubelet[3556]: I0421 10:17:16.556433 3556 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:17:16.565050 kubelet[3556]: I0421 10:17:16.564351 3556 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:17:16.571009 kubelet[3556]: E0421 10:17:16.570962 3556 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = 
Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:17:16.571009 kubelet[3556]: I0421 10:17:16.570997 3556 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:17:16.574305 kubelet[3556]: I0421 10:17:16.574279 3556 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 21 10:17:16.576256 kubelet[3556]: I0421 10:17:16.575865 3556 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:17:16.576256 kubelet[3556]: I0421 10:17:16.575908 3556 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-26","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 21 10:17:16.576256 kubelet[3556]: I0421 10:17:16.576102 3556 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:17:16.576256 kubelet[3556]: I0421 10:17:16.576111 3556 container_manager_linux.go:303] "Creating device plugin manager"
Apr 21 10:17:16.576256 kubelet[3556]: I0421 10:17:16.576163 3556 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:17:16.577414 kubelet[3556]: I0421 10:17:16.576360 3556 kubelet.go:480] "Attempting to sync node with API server"
Apr 21 10:17:16.577414 kubelet[3556]: I0421 10:17:16.576375 3556 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:17:16.577414 kubelet[3556]: I0421 10:17:16.576405 3556 kubelet.go:386] "Adding apiserver pod source"
Apr 21 10:17:16.577414 kubelet[3556]: I0421 10:17:16.576449 3556 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:17:16.588535 kubelet[3556]: I0421 10:17:16.584556 3556 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:17:16.588535 kubelet[3556]: I0421 10:17:16.585695 3556 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:17:16.590565 kubelet[3556]: I0421 10:17:16.590535 3556 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 21 10:17:16.590666 kubelet[3556]: I0421 10:17:16.590610 3556 server.go:1289] "Started kubelet"
Apr 21 10:17:16.592741 kubelet[3556]: I0421 10:17:16.592709 3556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:17:16.613226 kubelet[3556]: I0421 10:17:16.613186 3556 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:17:16.614962 kubelet[3556]: I0421 10:17:16.614938 3556 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:17:16.625127 kubelet[3556]: I0421 10:17:16.624976 3556 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:17:16.626868 kubelet[3556]: I0421 10:17:16.626701 3556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:17:16.629382 kubelet[3556]: I0421 10:17:16.628244 3556 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:17:16.629382 kubelet[3556]: I0421 10:17:16.628914 3556 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:17:16.634537 kubelet[3556]: I0421 10:17:16.634484 3556 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 21 10:17:16.634858 kubelet[3556]: E0421 10:17:16.634815 3556 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-26\" not found"
Apr 21 10:17:16.638119 kubelet[3556]: I0421 10:17:16.637988 3556 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 21 10:17:16.638233 kubelet[3556]: I0421 10:17:16.638160 3556 reconciler.go:26] "Reconciler: start to sync state"
Apr 21 10:17:16.657911 kubelet[3556]: I0421 10:17:16.657860 3556 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:17:16.657911 kubelet[3556]: I0421 10:17:16.657892 3556 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 10:17:16.658129 kubelet[3556]: I0421 10:17:16.657935 3556 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:17:16.658129 kubelet[3556]: I0421 10:17:16.657945 3556 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 10:17:16.658129 kubelet[3556]: E0421 10:17:16.657994 3556 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:17:16.664557 kubelet[3556]: I0421 10:17:16.663828 3556 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:17:16.664557 kubelet[3556]: I0421 10:17:16.663947 3556 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:17:16.666553 kubelet[3556]: I0421 10:17:16.666531 3556 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:17:16.760499 kubelet[3556]: E0421 10:17:16.759312 3556 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 21 10:17:16.787077 kubelet[3556]: I0421 10:17:16.785715 3556 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:17:16.787077 kubelet[3556]: I0421 10:17:16.785750 3556 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:17:16.787077 kubelet[3556]: I0421 10:17:16.785774 3556 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:17:16.787077 kubelet[3556]: I0421 10:17:16.786355 3556 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 21 10:17:16.787077 kubelet[3556]: I0421 10:17:16.786384 3556 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 21 10:17:16.787077 kubelet[3556]: I0421 10:17:16.786414 3556 policy_none.go:49] "None policy: Start"
Apr 21 10:17:16.787077 kubelet[3556]: I0421 10:17:16.786428 3556 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 10:17:16.787077 kubelet[3556]: I0421 10:17:16.786445 3556 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 10:17:16.787077 kubelet[3556]: I0421 10:17:16.786579 3556 state_mem.go:75] "Updated machine memory state"
Apr 21 10:17:16.790047 kubelet[3556]: E0421 10:17:16.789343 3556 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:17:16.790047 kubelet[3556]: I0421 10:17:16.789556 3556 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:17:16.790047 kubelet[3556]: I0421 10:17:16.789572 3556 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:17:16.791621 kubelet[3556]: I0421 10:17:16.791199 3556 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:17:16.794656 kubelet[3556]: E0421 10:17:16.794626 3556 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:17:16.900218 kubelet[3556]: I0421 10:17:16.900177 3556 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-26"
Apr 21 10:17:16.909375 kubelet[3556]: I0421 10:17:16.909338 3556 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-26"
Apr 21 10:17:16.909527 kubelet[3556]: I0421 10:17:16.909440 3556 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-26"
Apr 21 10:17:16.960328 kubelet[3556]: I0421 10:17:16.960148 3556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-26"
Apr 21 10:17:16.960328 kubelet[3556]: I0421 10:17:16.960222 3556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-26"
Apr 21 10:17:16.967471 kubelet[3556]: I0421 10:17:16.967195 3556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-26"
Apr 21 10:17:17.040114 kubelet[3556]: I0421 10:17:17.039984 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4efb89d8de1589573e7db4ffeb6bfbb9-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-26\" (UID: \"4efb89d8de1589573e7db4ffeb6bfbb9\") " pod="kube-system/kube-apiserver-ip-172-31-28-26"
Apr 21 10:17:17.040685 kubelet[3556]: I0421 10:17:17.040625 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a690e564996de536f0dd45135e2ee1c8-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-26\" (UID: \"a690e564996de536f0dd45135e2ee1c8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-26"
Apr 21 10:17:17.040685 kubelet[3556]: I0421 10:17:17.040682 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a690e564996de536f0dd45135e2ee1c8-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-26\" (UID: \"a690e564996de536f0dd45135e2ee1c8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-26"
Apr 21 10:17:17.041218 kubelet[3556]: I0421 10:17:17.040708 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a690e564996de536f0dd45135e2ee1c8-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-26\" (UID: \"a690e564996de536f0dd45135e2ee1c8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-26"
Apr 21 10:17:17.041218 kubelet[3556]: I0421 10:17:17.040826 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/431d5a200bf477df85cbb4546ba5b32b-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-26\" (UID: \"431d5a200bf477df85cbb4546ba5b32b\") " pod="kube-system/kube-scheduler-ip-172-31-28-26"
Apr 21 10:17:17.041218 kubelet[3556]: I0421 10:17:17.040854 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4efb89d8de1589573e7db4ffeb6bfbb9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-26\" (UID: \"4efb89d8de1589573e7db4ffeb6bfbb9\") " pod="kube-system/kube-apiserver-ip-172-31-28-26"
Apr 21 10:17:17.041218 kubelet[3556]: I0421 10:17:17.040874 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a690e564996de536f0dd45135e2ee1c8-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-26\" (UID: \"a690e564996de536f0dd45135e2ee1c8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-26"
Apr 21 10:17:17.041218 kubelet[3556]: I0421 10:17:17.040899 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a690e564996de536f0dd45135e2ee1c8-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-26\" (UID: \"a690e564996de536f0dd45135e2ee1c8\") " pod="kube-system/kube-controller-manager-ip-172-31-28-26"
Apr 21 10:17:17.041466 kubelet[3556]: I0421 10:17:17.040920 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4efb89d8de1589573e7db4ffeb6bfbb9-ca-certs\") pod \"kube-apiserver-ip-172-31-28-26\" (UID: \"4efb89d8de1589573e7db4ffeb6bfbb9\") " pod="kube-system/kube-apiserver-ip-172-31-28-26"
Apr 21 10:17:17.579788 kubelet[3556]: I0421 10:17:17.579568 3556 apiserver.go:52] "Watching apiserver"
Apr 21 10:17:17.638433 kubelet[3556]: I0421 10:17:17.638360 3556 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 21 10:17:17.710979 kubelet[3556]: I0421 10:17:17.710048 3556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-26"
Apr 21 10:17:17.710979 kubelet[3556]: I0421 10:17:17.710773 3556 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-26"
Apr 21 10:17:17.720269 kubelet[3556]: E0421 10:17:17.719943 3556 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-26\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-26"
Apr 21 10:17:17.720493 kubelet[3556]: E0421 10:17:17.720471 3556 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-26\" already exists" pod="kube-system/kube-scheduler-ip-172-31-28-26"
Apr 21 10:17:17.741517 kubelet[3556]: I0421 10:17:17.740999 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-26" podStartSLOduration=1.740981763 podStartE2EDuration="1.740981763s" podCreationTimestamp="2026-04-21 10:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:17:17.740601037 +0000 UTC m=+1.274703555" watchObservedRunningTime="2026-04-21 10:17:17.740981763 +0000 UTC m=+1.275084241"
Apr 21 10:17:17.769512 kubelet[3556]: I0421 10:17:17.769430 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-26" podStartSLOduration=1.769408348 podStartE2EDuration="1.769408348s" podCreationTimestamp="2026-04-21 10:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:17:17.755139594 +0000 UTC m=+1.289242069" watchObservedRunningTime="2026-04-21 10:17:17.769408348 +0000 UTC m=+1.303510825"
Apr 21 10:17:17.783539 kubelet[3556]: I0421 10:17:17.783435 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-26" podStartSLOduration=1.7834175060000002 podStartE2EDuration="1.783417506s" podCreationTimestamp="2026-04-21 10:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:17:17.770747578 +0000 UTC m=+1.304850052" watchObservedRunningTime="2026-04-21 10:17:17.783417506 +0000 UTC m=+1.317519983"
Apr 21 10:17:21.361667 kubelet[3556]: I0421 10:17:21.361624 3556 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 21 10:17:21.362513 kubelet[3556]: I0421 10:17:21.362354 3556 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 21 10:17:21.362618 containerd[2109]: time="2026-04-21T10:17:21.362139032Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 21 10:17:22.180692 kubelet[3556]: I0421 10:17:22.180647 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46ee4887-fab7-41e1-bccd-3dfbd9b3b98c-lib-modules\") pod \"kube-proxy-kpplv\" (UID: \"46ee4887-fab7-41e1-bccd-3dfbd9b3b98c\") " pod="kube-system/kube-proxy-kpplv"
Apr 21 10:17:22.180692 kubelet[3556]: I0421 10:17:22.180697 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwxmc\" (UniqueName: \"kubernetes.io/projected/46ee4887-fab7-41e1-bccd-3dfbd9b3b98c-kube-api-access-qwxmc\") pod \"kube-proxy-kpplv\" (UID: \"46ee4887-fab7-41e1-bccd-3dfbd9b3b98c\") " pod="kube-system/kube-proxy-kpplv"
Apr 21 10:17:22.180692 kubelet[3556]: I0421 10:17:22.180826 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46ee4887-fab7-41e1-bccd-3dfbd9b3b98c-kube-proxy\") pod \"kube-proxy-kpplv\" (UID: \"46ee4887-fab7-41e1-bccd-3dfbd9b3b98c\") " pod="kube-system/kube-proxy-kpplv"
Apr 21 10:17:22.180692 kubelet[3556]: I0421 10:17:22.180848 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46ee4887-fab7-41e1-bccd-3dfbd9b3b98c-xtables-lock\") pod \"kube-proxy-kpplv\" (UID: \"46ee4887-fab7-41e1-bccd-3dfbd9b3b98c\") " pod="kube-system/kube-proxy-kpplv"
Apr 21 10:17:22.460936 containerd[2109]: time="2026-04-21T10:17:22.460364920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpplv,Uid:46ee4887-fab7-41e1-bccd-3dfbd9b3b98c,Namespace:kube-system,Attempt:0,}"
Apr 21 10:17:22.511665 containerd[2109]: time="2026-04-21T10:17:22.510483367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:17:22.511665 containerd[2109]: time="2026-04-21T10:17:22.510569971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:17:22.511665 containerd[2109]: time="2026-04-21T10:17:22.510591521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:17:22.511665 containerd[2109]: time="2026-04-21T10:17:22.510717434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:17:22.586856 kubelet[3556]: I0421 10:17:22.585308 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/11d6038b-a682-46b8-939f-9f66e3fe1c0f-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-pvqs5\" (UID: \"11d6038b-a682-46b8-939f-9f66e3fe1c0f\") " pod="tigera-operator/tigera-operator-6bf85f8dd-pvqs5"
Apr 21 10:17:22.586856 kubelet[3556]: I0421 10:17:22.585369 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcvhn\" (UniqueName: \"kubernetes.io/projected/11d6038b-a682-46b8-939f-9f66e3fe1c0f-kube-api-access-tcvhn\") pod \"tigera-operator-6bf85f8dd-pvqs5\" (UID: \"11d6038b-a682-46b8-939f-9f66e3fe1c0f\") " pod="tigera-operator/tigera-operator-6bf85f8dd-pvqs5"
Apr 21 10:17:22.633066 containerd[2109]: time="2026-04-21T10:17:22.631844133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kpplv,Uid:46ee4887-fab7-41e1-bccd-3dfbd9b3b98c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2719b48fe721107c3c47fd290cf5001d821f9e401e75d5adb31ad5f9a91b572c\""
Apr 21 10:17:22.638685 containerd[2109]: time="2026-04-21T10:17:22.638449606Z" level=info msg="CreateContainer within sandbox \"2719b48fe721107c3c47fd290cf5001d821f9e401e75d5adb31ad5f9a91b572c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 21 10:17:22.666590 containerd[2109]: time="2026-04-21T10:17:22.666541999Z" level=info msg="CreateContainer within sandbox \"2719b48fe721107c3c47fd290cf5001d821f9e401e75d5adb31ad5f9a91b572c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6d5aa0ce9dd5362a902c44c30fb030990b38312e15ae37a7e0d347cfa1a4efc3\""
Apr 21 10:17:22.667282 containerd[2109]: time="2026-04-21T10:17:22.667172592Z" level=info msg="StartContainer for \"6d5aa0ce9dd5362a902c44c30fb030990b38312e15ae37a7e0d347cfa1a4efc3\""
Apr 21 10:17:22.733943 containerd[2109]: time="2026-04-21T10:17:22.733691277Z" level=info msg="StartContainer for \"6d5aa0ce9dd5362a902c44c30fb030990b38312e15ae37a7e0d347cfa1a4efc3\" returns successfully"
Apr 21 10:17:22.920562 containerd[2109]: time="2026-04-21T10:17:22.920511919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-pvqs5,Uid:11d6038b-a682-46b8-939f-9f66e3fe1c0f,Namespace:tigera-operator,Attempt:0,}"
Apr 21 10:17:22.952075 containerd[2109]: time="2026-04-21T10:17:22.951881316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 21 10:17:22.952075 containerd[2109]: time="2026-04-21T10:17:22.951983931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 21 10:17:22.952075 containerd[2109]: time="2026-04-21T10:17:22.952006974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:17:22.952402 containerd[2109]: time="2026-04-21T10:17:22.952177337Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 21 10:17:23.012546 containerd[2109]: time="2026-04-21T10:17:23.012074429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-pvqs5,Uid:11d6038b-a682-46b8-939f-9f66e3fe1c0f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3cff99a600f5bb0c6999afcc04e4d08d01cff54dd2ef862331680bdaa73500d5\""
Apr 21 10:17:23.015305 containerd[2109]: time="2026-04-21T10:17:23.015003626Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 21 10:17:23.307792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4016971733.mount: Deactivated successfully.
Apr 21 10:17:23.803685 kubelet[3556]: I0421 10:17:23.803523 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kpplv" podStartSLOduration=1.80350012 podStartE2EDuration="1.80350012s" podCreationTimestamp="2026-04-21 10:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:17:23.741621494 +0000 UTC m=+7.275723971" watchObservedRunningTime="2026-04-21 10:17:23.80350012 +0000 UTC m=+7.337602597"
Apr 21 10:17:24.245751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1424668243.mount: Deactivated successfully.
Apr 21 10:17:25.664938 containerd[2109]: time="2026-04-21T10:17:25.664882664Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:25.666359 containerd[2109]: time="2026-04-21T10:17:25.666190907Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 21 10:17:25.667998 containerd[2109]: time="2026-04-21T10:17:25.667484175Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:25.670632 containerd[2109]: time="2026-04-21T10:17:25.670577674Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:25.671518 containerd[2109]: time="2026-04-21T10:17:25.671337601Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.655545466s"
Apr 21 10:17:25.671518 containerd[2109]: time="2026-04-21T10:17:25.671412530Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 21 10:17:25.677598 containerd[2109]: time="2026-04-21T10:17:25.677557507Z" level=info msg="CreateContainer within sandbox \"3cff99a600f5bb0c6999afcc04e4d08d01cff54dd2ef862331680bdaa73500d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 21 10:17:25.694679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3806692125.mount: Deactivated successfully.
Apr 21 10:17:25.696520 containerd[2109]: time="2026-04-21T10:17:25.696479242Z" level=info msg="CreateContainer within sandbox \"3cff99a600f5bb0c6999afcc04e4d08d01cff54dd2ef862331680bdaa73500d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5b877d81c908ec696dca9a25657035dbefe26ad757fa33f1b7b6e17bf8eb8481\""
Apr 21 10:17:25.700124 containerd[2109]: time="2026-04-21T10:17:25.698335463Z" level=info msg="StartContainer for \"5b877d81c908ec696dca9a25657035dbefe26ad757fa33f1b7b6e17bf8eb8481\""
Apr 21 10:17:25.764841 containerd[2109]: time="2026-04-21T10:17:25.764788882Z" level=info msg="StartContainer for \"5b877d81c908ec696dca9a25657035dbefe26ad757fa33f1b7b6e17bf8eb8481\" returns successfully"
Apr 21 10:17:26.993883 kubelet[3556]: I0421 10:17:26.993811 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-pvqs5" podStartSLOduration=2.3354951 podStartE2EDuration="4.9937896s" podCreationTimestamp="2026-04-21 10:17:22 +0000 UTC" firstStartedPulling="2026-04-21 10:17:23.014212984 +0000 UTC m=+6.548315453" lastFinishedPulling="2026-04-21 10:17:25.672507485 +0000 UTC m=+9.206609953" observedRunningTime="2026-04-21 10:17:26.752315803 +0000 UTC m=+10.286418277" watchObservedRunningTime="2026-04-21 10:17:26.9937896 +0000 UTC m=+10.527892137"
Apr 21 10:17:33.370532 sudo[2449]: pam_unix(sudo:session): session closed for user root
Apr 21 10:17:33.538400 sshd[2445]: pam_unix(sshd:session): session closed for user core
Apr 21 10:17:33.546263 systemd[1]: sshd@6-172.31.28.26:22-50.85.169.122:42132.service: Deactivated successfully.
Apr 21 10:17:33.561180 systemd-logind[2075]: Session 7 logged out. Waiting for processes to exit.
Apr 21 10:17:33.562596 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 10:17:33.568090 systemd-logind[2075]: Removed session 7.
Apr 21 10:17:37.508150 kubelet[3556]: I0421 10:17:37.507984 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmwc5\" (UniqueName: \"kubernetes.io/projected/2c02a821-7e26-4c2c-ba5d-557991709f97-kube-api-access-rmwc5\") pod \"calico-typha-6bd8788675-t2cp2\" (UID: \"2c02a821-7e26-4c2c-ba5d-557991709f97\") " pod="calico-system/calico-typha-6bd8788675-t2cp2"
Apr 21 10:17:37.508150 kubelet[3556]: I0421 10:17:37.508045 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2c02a821-7e26-4c2c-ba5d-557991709f97-typha-certs\") pod \"calico-typha-6bd8788675-t2cp2\" (UID: \"2c02a821-7e26-4c2c-ba5d-557991709f97\") " pod="calico-system/calico-typha-6bd8788675-t2cp2"
Apr 21 10:17:37.508150 kubelet[3556]: I0421 10:17:37.508077 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c02a821-7e26-4c2c-ba5d-557991709f97-tigera-ca-bundle\") pod \"calico-typha-6bd8788675-t2cp2\" (UID: \"2c02a821-7e26-4c2c-ba5d-557991709f97\") " pod="calico-system/calico-typha-6bd8788675-t2cp2"
Apr 21 10:17:37.608536 kubelet[3556]: I0421 10:17:37.608451 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-cni-net-dir\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.608536 kubelet[3556]: I0421 10:17:37.608507 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-xtables-lock\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609349 kubelet[3556]: I0421 10:17:37.608554 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-cni-bin-dir\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609349 kubelet[3556]: I0421 10:17:37.608585 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-nodeproc\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609349 kubelet[3556]: I0421 10:17:37.608613 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-bpffs\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609349 kubelet[3556]: I0421 10:17:37.608636 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-var-lib-calico\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609349 kubelet[3556]: I0421 10:17:37.608662 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-flexvol-driver-host\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609639 kubelet[3556]: I0421 10:17:37.608853 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-lib-modules\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609639 kubelet[3556]: I0421 10:17:37.608874 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5438f94a-af8b-493f-979c-c14fde7da6dc-node-certs\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609639 kubelet[3556]: I0421 10:17:37.608915 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5438f94a-af8b-493f-979c-c14fde7da6dc-tigera-ca-bundle\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609639 kubelet[3556]: I0421 10:17:37.608958 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-var-run-calico\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609639 kubelet[3556]: I0421 10:17:37.608984 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs7c8\" (UniqueName: \"kubernetes.io/projected/5438f94a-af8b-493f-979c-c14fde7da6dc-kube-api-access-vs7c8\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609868 kubelet[3556]: I0421 10:17:37.609035 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-cni-log-dir\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609868 kubelet[3556]: I0421 10:17:37.609060 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-policysync\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.609868 kubelet[3556]: I0421 10:17:37.609082 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/5438f94a-af8b-493f-979c-c14fde7da6dc-sys-fs\") pod \"calico-node-hk5xg\" (UID: \"5438f94a-af8b-493f-979c-c14fde7da6dc\") " pod="calico-system/calico-node-hk5xg"
Apr 21 10:17:37.669084 kubelet[3556]: E0421 10:17:37.668391 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkknl" podUID="78edf098-c99a-45bc-bf91-cfbe789bd2f5"
Apr 21 10:17:37.709950 kubelet[3556]: I0421 10:17:37.709897 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/78edf098-c99a-45bc-bf91-cfbe789bd2f5-registration-dir\") pod \"csi-node-driver-rkknl\" (UID: \"78edf098-c99a-45bc-bf91-cfbe789bd2f5\") " pod="calico-system/csi-node-driver-rkknl"
Apr 21 10:17:37.710135 kubelet[3556]: I0421 10:17:37.710003 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/78edf098-c99a-45bc-bf91-cfbe789bd2f5-socket-dir\") pod \"csi-node-driver-rkknl\" (UID: \"78edf098-c99a-45bc-bf91-cfbe789bd2f5\") " pod="calico-system/csi-node-driver-rkknl"
Apr 21 10:17:37.710135 kubelet[3556]: I0421 10:17:37.710041 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt58l\" (UniqueName: \"kubernetes.io/projected/78edf098-c99a-45bc-bf91-cfbe789bd2f5-kube-api-access-xt58l\") pod \"csi-node-driver-rkknl\" (UID: \"78edf098-c99a-45bc-bf91-cfbe789bd2f5\") " pod="calico-system/csi-node-driver-rkknl"
Apr 21 10:17:37.710135 kubelet[3556]: I0421 10:17:37.710083 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78edf098-c99a-45bc-bf91-cfbe789bd2f5-kubelet-dir\") pod \"csi-node-driver-rkknl\" (UID: \"78edf098-c99a-45bc-bf91-cfbe789bd2f5\") " pod="calico-system/csi-node-driver-rkknl"
Apr 21 10:17:37.710266 kubelet[3556]: I0421 10:17:37.710144 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/78edf098-c99a-45bc-bf91-cfbe789bd2f5-varrun\") pod \"csi-node-driver-rkknl\" (UID: \"78edf098-c99a-45bc-bf91-cfbe789bd2f5\") " pod="calico-system/csi-node-driver-rkknl"
Apr 21 10:17:37.726528 kubelet[3556]: E0421 10:17:37.726490 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:17:37.726644 kubelet[3556]: W0421 10:17:37.726534 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:17:37.726644 kubelet[3556]: E0421 10:17:37.726559 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 10:17:37.728060 kubelet[3556]: E0421 10:17:37.727845 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 10:17:37.728060 kubelet[3556]: W0421 10:17:37.727866 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 10:17:37.728060 kubelet[3556]: E0421 10:17:37.727886 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.750076 kubelet[3556]: E0421 10:17:37.747429 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.750076 kubelet[3556]: W0421 10:17:37.747459 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.750076 kubelet[3556]: E0421 10:17:37.747484 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.766572 containerd[2109]: time="2026-04-21T10:17:37.766402959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bd8788675-t2cp2,Uid:2c02a821-7e26-4c2c-ba5d-557991709f97,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:37.812957 kubelet[3556]: E0421 10:17:37.811384 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.812957 kubelet[3556]: W0421 10:17:37.811630 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.812957 kubelet[3556]: E0421 10:17:37.811668 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.812957 kubelet[3556]: E0421 10:17:37.812614 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.812957 kubelet[3556]: W0421 10:17:37.812631 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.812957 kubelet[3556]: E0421 10:17:37.812651 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.814306 kubelet[3556]: E0421 10:17:37.814089 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.814306 kubelet[3556]: W0421 10:17:37.814105 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.814306 kubelet[3556]: E0421 10:17:37.814124 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.815001 kubelet[3556]: E0421 10:17:37.814739 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.815001 kubelet[3556]: W0421 10:17:37.814752 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.815001 kubelet[3556]: E0421 10:17:37.814765 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.815251 kubelet[3556]: E0421 10:17:37.815239 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.815398 kubelet[3556]: W0421 10:17:37.815318 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.815398 kubelet[3556]: E0421 10:17:37.815336 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.815850 kubelet[3556]: E0421 10:17:37.815724 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.815850 kubelet[3556]: W0421 10:17:37.815746 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.815850 kubelet[3556]: E0421 10:17:37.815759 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.816350 kubelet[3556]: E0421 10:17:37.816205 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.816350 kubelet[3556]: W0421 10:17:37.816218 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.816350 kubelet[3556]: E0421 10:17:37.816230 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.816829 kubelet[3556]: E0421 10:17:37.816647 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.816829 kubelet[3556]: W0421 10:17:37.816659 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.816829 kubelet[3556]: E0421 10:17:37.816685 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.817286 kubelet[3556]: E0421 10:17:37.817176 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.817286 kubelet[3556]: W0421 10:17:37.817189 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.817286 kubelet[3556]: E0421 10:17:37.817202 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.817750 kubelet[3556]: E0421 10:17:37.817665 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.817750 kubelet[3556]: W0421 10:17:37.817678 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.817750 kubelet[3556]: E0421 10:17:37.817691 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.818311 kubelet[3556]: E0421 10:17:37.818111 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.818311 kubelet[3556]: W0421 10:17:37.818124 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.818311 kubelet[3556]: E0421 10:17:37.818136 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.818659 kubelet[3556]: E0421 10:17:37.818598 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.818659 kubelet[3556]: W0421 10:17:37.818614 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.818659 kubelet[3556]: E0421 10:17:37.818629 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.819210 kubelet[3556]: E0421 10:17:37.819125 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.819210 kubelet[3556]: W0421 10:17:37.819139 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.819210 kubelet[3556]: E0421 10:17:37.819152 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.819846 kubelet[3556]: E0421 10:17:37.819640 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.819846 kubelet[3556]: W0421 10:17:37.819653 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.819846 kubelet[3556]: E0421 10:17:37.819666 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.820192 kubelet[3556]: E0421 10:17:37.820103 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.820192 kubelet[3556]: W0421 10:17:37.820116 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.820192 kubelet[3556]: E0421 10:17:37.820130 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.820679 kubelet[3556]: E0421 10:17:37.820558 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.820679 kubelet[3556]: W0421 10:17:37.820570 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.820679 kubelet[3556]: E0421 10:17:37.820583 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.821222 kubelet[3556]: E0421 10:17:37.821057 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.821222 kubelet[3556]: W0421 10:17:37.821071 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.821222 kubelet[3556]: E0421 10:17:37.821083 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.823172 kubelet[3556]: E0421 10:17:37.823159 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.823367 kubelet[3556]: W0421 10:17:37.823248 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.823367 kubelet[3556]: E0421 10:17:37.823266 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.823680 kubelet[3556]: E0421 10:17:37.823639 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.823680 kubelet[3556]: W0421 10:17:37.823652 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.823680 kubelet[3556]: E0421 10:17:37.823665 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.832902 kubelet[3556]: E0421 10:17:37.830313 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.832902 kubelet[3556]: W0421 10:17:37.830339 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.832902 kubelet[3556]: E0421 10:17:37.830365 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.832902 kubelet[3556]: E0421 10:17:37.830751 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.832902 kubelet[3556]: W0421 10:17:37.830768 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.832902 kubelet[3556]: E0421 10:17:37.830786 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.832902 kubelet[3556]: E0421 10:17:37.832755 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.832902 kubelet[3556]: W0421 10:17:37.832775 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.832902 kubelet[3556]: E0421 10:17:37.832794 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.833633 kubelet[3556]: E0421 10:17:37.833616 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.833817 kubelet[3556]: W0421 10:17:37.833769 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.833817 kubelet[3556]: E0421 10:17:37.833791 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.835268 kubelet[3556]: E0421 10:17:37.835252 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.836045 kubelet[3556]: W0421 10:17:37.835450 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.836045 kubelet[3556]: E0421 10:17:37.835473 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.840115 kubelet[3556]: E0421 10:17:37.839345 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.840115 kubelet[3556]: W0421 10:17:37.839370 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.840115 kubelet[3556]: E0421 10:17:37.839394 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:17:37.846485 kubelet[3556]: E0421 10:17:37.846394 3556 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:17:37.846485 kubelet[3556]: W0421 10:17:37.846417 3556 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:17:37.846485 kubelet[3556]: E0421 10:17:37.846439 3556 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:17:37.871170 containerd[2109]: time="2026-04-21T10:17:37.870623351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:37.871170 containerd[2109]: time="2026-04-21T10:17:37.870722271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:37.871170 containerd[2109]: time="2026-04-21T10:17:37.870745597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:37.871170 containerd[2109]: time="2026-04-21T10:17:37.870970061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:37.879436 containerd[2109]: time="2026-04-21T10:17:37.879194741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hk5xg,Uid:5438f94a-af8b-493f-979c-c14fde7da6dc,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:37.930542 containerd[2109]: time="2026-04-21T10:17:37.930101377Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:37.930542 containerd[2109]: time="2026-04-21T10:17:37.930246493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:37.930542 containerd[2109]: time="2026-04-21T10:17:37.930271795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:37.930542 containerd[2109]: time="2026-04-21T10:17:37.930428721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:37.990836 containerd[2109]: time="2026-04-21T10:17:37.990695643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hk5xg,Uid:5438f94a-af8b-493f-979c-c14fde7da6dc,Namespace:calico-system,Attempt:0,} returns sandbox id \"90e8a895c1c4b2bddbd8ea0d017a4918f685a392ba85623f061f5ee76dcb1eb8\"" Apr 21 10:17:37.993540 containerd[2109]: time="2026-04-21T10:17:37.993424293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:17:37.998078 containerd[2109]: time="2026-04-21T10:17:37.995600803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bd8788675-t2cp2,Uid:2c02a821-7e26-4c2c-ba5d-557991709f97,Namespace:calico-system,Attempt:0,} returns sandbox id \"c1e23ed89095e22de19a663c758cf284e08f1522f49859f3e892cf83df1ac139\"" Apr 21 10:17:39.615694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4001177398.mount: Deactivated successfully. 
Apr 21 10:17:39.660303 kubelet[3556]: E0421 10:17:39.660241 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkknl" podUID="78edf098-c99a-45bc-bf91-cfbe789bd2f5" Apr 21 10:17:39.728070 containerd[2109]: time="2026-04-21T10:17:39.727980639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:39.729361 containerd[2109]: time="2026-04-21T10:17:39.729308721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Apr 21 10:17:39.730691 containerd[2109]: time="2026-04-21T10:17:39.730632515Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:39.733762 containerd[2109]: time="2026-04-21T10:17:39.733723888Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:39.736982 containerd[2109]: time="2026-04-21T10:17:39.734674141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.740924804s" Apr 21 10:17:39.736982 containerd[2109]: time="2026-04-21T10:17:39.735649107Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:17:39.745090 containerd[2109]: time="2026-04-21T10:17:39.745051729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:17:39.749217 containerd[2109]: time="2026-04-21T10:17:39.749178181Z" level=info msg="CreateContainer within sandbox \"90e8a895c1c4b2bddbd8ea0d017a4918f685a392ba85623f061f5ee76dcb1eb8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:17:39.820881 containerd[2109]: time="2026-04-21T10:17:39.818780582Z" level=info msg="CreateContainer within sandbox \"90e8a895c1c4b2bddbd8ea0d017a4918f685a392ba85623f061f5ee76dcb1eb8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2ba1379b3499d427a809979ffeff6bee9db7e96b829af04f8ad043e89b37ff8f\"" Apr 21 10:17:39.821281 containerd[2109]: time="2026-04-21T10:17:39.821251716Z" level=info msg="StartContainer for \"2ba1379b3499d427a809979ffeff6bee9db7e96b829af04f8ad043e89b37ff8f\"" Apr 21 10:17:39.898309 containerd[2109]: time="2026-04-21T10:17:39.898115617Z" level=info msg="StartContainer for \"2ba1379b3499d427a809979ffeff6bee9db7e96b829af04f8ad043e89b37ff8f\" returns successfully" Apr 21 10:17:39.946985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ba1379b3499d427a809979ffeff6bee9db7e96b829af04f8ad043e89b37ff8f-rootfs.mount: Deactivated successfully. 
Apr 21 10:17:39.981689 containerd[2109]: time="2026-04-21T10:17:39.959441205Z" level=info msg="shim disconnected" id=2ba1379b3499d427a809979ffeff6bee9db7e96b829af04f8ad043e89b37ff8f namespace=k8s.io Apr 21 10:17:39.981689 containerd[2109]: time="2026-04-21T10:17:39.981692618Z" level=warning msg="cleaning up after shim disconnected" id=2ba1379b3499d427a809979ffeff6bee9db7e96b829af04f8ad043e89b37ff8f namespace=k8s.io Apr 21 10:17:39.982001 containerd[2109]: time="2026-04-21T10:17:39.981713021Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:17:39.996295 containerd[2109]: time="2026-04-21T10:17:39.996243829Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:17:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:17:41.658627 kubelet[3556]: E0421 10:17:41.658579 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkknl" podUID="78edf098-c99a-45bc-bf91-cfbe789bd2f5" Apr 21 10:17:42.793542 containerd[2109]: time="2026-04-21T10:17:42.793488901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:42.795053 containerd[2109]: time="2026-04-21T10:17:42.794901975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Apr 21 10:17:42.796516 containerd[2109]: time="2026-04-21T10:17:42.796449339Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:42.803145 containerd[2109]: time="2026-04-21T10:17:42.803069569Z" 
level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:17:42.804224 containerd[2109]: time="2026-04-21T10:17:42.803935793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.058688667s" Apr 21 10:17:42.804224 containerd[2109]: time="2026-04-21T10:17:42.803980286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:17:42.806176 containerd[2109]: time="2026-04-21T10:17:42.806147873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:17:42.829272 containerd[2109]: time="2026-04-21T10:17:42.829228109Z" level=info msg="CreateContainer within sandbox \"c1e23ed89095e22de19a663c758cf284e08f1522f49859f3e892cf83df1ac139\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:17:42.852767 containerd[2109]: time="2026-04-21T10:17:42.852469717Z" level=info msg="CreateContainer within sandbox \"c1e23ed89095e22de19a663c758cf284e08f1522f49859f3e892cf83df1ac139\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5d549700c084b25f76b3dd43fe8f138187ec528ea638048c6992d6f8c5c40b6f\"" Apr 21 10:17:42.855613 containerd[2109]: time="2026-04-21T10:17:42.855554089Z" level=info msg="StartContainer for \"5d549700c084b25f76b3dd43fe8f138187ec528ea638048c6992d6f8c5c40b6f\"" Apr 21 10:17:42.856181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120656179.mount: Deactivated successfully. 
Apr 21 10:17:42.941781 containerd[2109]: time="2026-04-21T10:17:42.941712495Z" level=info msg="StartContainer for \"5d549700c084b25f76b3dd43fe8f138187ec528ea638048c6992d6f8c5c40b6f\" returns successfully"
Apr 21 10:17:43.658717 kubelet[3556]: E0421 10:17:43.658648 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkknl" podUID="78edf098-c99a-45bc-bf91-cfbe789bd2f5"
Apr 21 10:17:44.844511 kubelet[3556]: I0421 10:17:44.844478 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:17:45.659510 kubelet[3556]: E0421 10:17:45.659263 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkknl" podUID="78edf098-c99a-45bc-bf91-cfbe789bd2f5"
Apr 21 10:17:47.658534 kubelet[3556]: E0421 10:17:47.658474 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkknl" podUID="78edf098-c99a-45bc-bf91-cfbe789bd2f5"
Apr 21 10:17:49.658549 kubelet[3556]: E0421 10:17:49.658354 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkknl" podUID="78edf098-c99a-45bc-bf91-cfbe789bd2f5"
Apr 21 10:17:50.361361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount244393153.mount: Deactivated successfully.
Apr 21 10:17:50.443265 containerd[2109]: time="2026-04-21T10:17:50.429621984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 21 10:17:50.443745 containerd[2109]: time="2026-04-21T10:17:50.429222372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:50.453605 containerd[2109]: time="2026-04-21T10:17:50.453559171Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:50.456837 containerd[2109]: time="2026-04-21T10:17:50.456758234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:50.458437 containerd[2109]: time="2026-04-21T10:17:50.457817514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.651618275s"
Apr 21 10:17:50.458437 containerd[2109]: time="2026-04-21T10:17:50.457866960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 21 10:17:50.484165 containerd[2109]: time="2026-04-21T10:17:50.484120722Z" level=info msg="CreateContainer within sandbox \"90e8a895c1c4b2bddbd8ea0d017a4918f685a392ba85623f061f5ee76dcb1eb8\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 21 10:17:50.514400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740073983.mount: Deactivated successfully.
Apr 21 10:17:50.522038 containerd[2109]: time="2026-04-21T10:17:50.521971174Z" level=info msg="CreateContainer within sandbox \"90e8a895c1c4b2bddbd8ea0d017a4918f685a392ba85623f061f5ee76dcb1eb8\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"63d8ef26503481a135c8976411828600da8484fba9a9c969e9dab714b340a611\""
Apr 21 10:17:50.524812 containerd[2109]: time="2026-04-21T10:17:50.522875026Z" level=info msg="StartContainer for \"63d8ef26503481a135c8976411828600da8484fba9a9c969e9dab714b340a611\""
Apr 21 10:17:50.615116 containerd[2109]: time="2026-04-21T10:17:50.614070171Z" level=info msg="StartContainer for \"63d8ef26503481a135c8976411828600da8484fba9a9c969e9dab714b340a611\" returns successfully"
Apr 21 10:17:50.789961 containerd[2109]: time="2026-04-21T10:17:50.784817065Z" level=info msg="shim disconnected" id=63d8ef26503481a135c8976411828600da8484fba9a9c969e9dab714b340a611 namespace=k8s.io
Apr 21 10:17:50.790283 containerd[2109]: time="2026-04-21T10:17:50.790253286Z" level=warning msg="cleaning up after shim disconnected" id=63d8ef26503481a135c8976411828600da8484fba9a9c969e9dab714b340a611 namespace=k8s.io
Apr 21 10:17:50.790394 containerd[2109]: time="2026-04-21T10:17:50.790365996Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:17:50.886322 containerd[2109]: time="2026-04-21T10:17:50.885831880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\""
Apr 21 10:17:50.916324 kubelet[3556]: I0421 10:17:50.913483 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bd8788675-t2cp2" podStartSLOduration=9.085604338 podStartE2EDuration="13.889998955s" podCreationTimestamp="2026-04-21 10:17:37 +0000 UTC" firstStartedPulling="2026-04-21 10:17:38.000730665 +0000 UTC m=+21.534833144" lastFinishedPulling="2026-04-21 10:17:42.805125292 +0000 UTC m=+26.339227761" observedRunningTime="2026-04-21 10:17:43.855205135 +0000 UTC m=+27.389307614" watchObservedRunningTime="2026-04-21 10:17:50.889998955 +0000 UTC m=+34.424101431"
Apr 21 10:17:51.361367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63d8ef26503481a135c8976411828600da8484fba9a9c969e9dab714b340a611-rootfs.mount: Deactivated successfully.
Apr 21 10:17:51.659497 kubelet[3556]: E0421 10:17:51.659345 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkknl" podUID="78edf098-c99a-45bc-bf91-cfbe789bd2f5"
Apr 21 10:17:53.659364 kubelet[3556]: E0421 10:17:53.659296 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkknl" podUID="78edf098-c99a-45bc-bf91-cfbe789bd2f5"
Apr 21 10:17:54.575825 containerd[2109]: time="2026-04-21T10:17:54.575773609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:54.577201 containerd[2109]: time="2026-04-21T10:17:54.577061273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 21 10:17:54.581076 containerd[2109]: time="2026-04-21T10:17:54.580163180Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:54.583755 containerd[2109]: time="2026-04-21T10:17:54.583715600Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:17:54.584583 containerd[2109]: time="2026-04-21T10:17:54.584548297Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.698664877s"
Apr 21 10:17:54.584854 containerd[2109]: time="2026-04-21T10:17:54.584830825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 21 10:17:54.589407 containerd[2109]: time="2026-04-21T10:17:54.589262042Z" level=info msg="CreateContainer within sandbox \"90e8a895c1c4b2bddbd8ea0d017a4918f685a392ba85623f061f5ee76dcb1eb8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 21 10:17:54.608427 containerd[2109]: time="2026-04-21T10:17:54.608355862Z" level=info msg="CreateContainer within sandbox \"90e8a895c1c4b2bddbd8ea0d017a4918f685a392ba85623f061f5ee76dcb1eb8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"392e5069c3ed9dd20c5e242704137ceb83f3d93273be2c8b1081cbc4e92b7bf6\""
Apr 21 10:17:54.624251 containerd[2109]: time="2026-04-21T10:17:54.624202692Z" level=info msg="StartContainer for \"392e5069c3ed9dd20c5e242704137ceb83f3d93273be2c8b1081cbc4e92b7bf6\""
Apr 21 10:17:54.716791 containerd[2109]: time="2026-04-21T10:17:54.716743247Z" level=info msg="StartContainer for \"392e5069c3ed9dd20c5e242704137ceb83f3d93273be2c8b1081cbc4e92b7bf6\" returns successfully"
Apr 21 10:17:54.722609 kubelet[3556]: I0421 10:17:54.722560 3556 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:17:55.659879 kubelet[3556]: E0421 10:17:55.659811 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rkknl" podUID="78edf098-c99a-45bc-bf91-cfbe789bd2f5"
Apr 21 10:17:55.881269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-392e5069c3ed9dd20c5e242704137ceb83f3d93273be2c8b1081cbc4e92b7bf6-rootfs.mount: Deactivated successfully.
Apr 21 10:17:55.889772 containerd[2109]: time="2026-04-21T10:17:55.889700069Z" level=info msg="shim disconnected" id=392e5069c3ed9dd20c5e242704137ceb83f3d93273be2c8b1081cbc4e92b7bf6 namespace=k8s.io
Apr 21 10:17:55.889772 containerd[2109]: time="2026-04-21T10:17:55.889767506Z" level=warning msg="cleaning up after shim disconnected" id=392e5069c3ed9dd20c5e242704137ceb83f3d93273be2c8b1081cbc4e92b7bf6 namespace=k8s.io
Apr 21 10:17:55.890550 containerd[2109]: time="2026-04-21T10:17:55.889778752Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:17:55.909201 containerd[2109]: time="2026-04-21T10:17:55.909153686Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:17:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 21 10:17:55.924113 kubelet[3556]: I0421 10:17:55.905771 3556 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Apr 21 10:17:56.035824 containerd[2109]: time="2026-04-21T10:17:56.035722183Z" level=info msg="CreateContainer within sandbox \"90e8a895c1c4b2bddbd8ea0d017a4918f685a392ba85623f061f5ee76dcb1eb8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 21 10:17:56.091948 containerd[2109]: time="2026-04-21T10:17:56.091744730Z" level=info msg="CreateContainer within sandbox \"90e8a895c1c4b2bddbd8ea0d017a4918f685a392ba85623f061f5ee76dcb1eb8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"984cff018b863c39e4908377efeff55a14f51f76ffd64ad76130da6dd5e3e1de\""
Apr 21 10:17:56.099051 containerd[2109]: time="2026-04-21T10:17:56.097668865Z" level=info msg="StartContainer for \"984cff018b863c39e4908377efeff55a14f51f76ffd64ad76130da6dd5e3e1de\""
Apr 21 10:17:56.207392 containerd[2109]: time="2026-04-21T10:17:56.207188073Z" level=info msg="StartContainer for \"984cff018b863c39e4908377efeff55a14f51f76ffd64ad76130da6dd5e3e1de\" returns successfully"
Apr 21 10:17:56.223896 kubelet[3556]: I0421 10:17:56.223641 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3fc0e5e1-29eb-4eba-bbc3-f696b0a92007-config-volume\") pod \"coredns-674b8bbfcf-lpv22\" (UID: \"3fc0e5e1-29eb-4eba-bbc3-f696b0a92007\") " pod="kube-system/coredns-674b8bbfcf-lpv22"
Apr 21 10:17:56.223896 kubelet[3556]: I0421 10:17:56.223772 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmfrk\" (UniqueName: \"kubernetes.io/projected/3fc0e5e1-29eb-4eba-bbc3-f696b0a92007-kube-api-access-gmfrk\") pod \"coredns-674b8bbfcf-lpv22\" (UID: \"3fc0e5e1-29eb-4eba-bbc3-f696b0a92007\") " pod="kube-system/coredns-674b8bbfcf-lpv22"
Apr 21 10:17:56.325183 kubelet[3556]: I0421 10:17:56.325136 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97d14944-e822-477b-a225-60b22c64b8f0-whisker-ca-bundle\") pod \"whisker-58fc5fcf6d-w6pl7\" (UID: \"97d14944-e822-477b-a225-60b22c64b8f0\") " pod="calico-system/whisker-58fc5fcf6d-w6pl7"
Apr 21 10:17:56.325183 kubelet[3556]: I0421 10:17:56.325184 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/befe1eda-78f2-4643-854f-76cc3bc600cc-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-9f7nn\" (UID: \"befe1eda-78f2-4643-854f-76cc3bc600cc\") " pod="calico-system/goldmane-5b85766d88-9f7nn"
Apr 21 10:17:56.325402 kubelet[3556]: I0421 10:17:56.325213 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/befe1eda-78f2-4643-854f-76cc3bc600cc-goldmane-key-pair\") pod \"goldmane-5b85766d88-9f7nn\" (UID: \"befe1eda-78f2-4643-854f-76cc3bc600cc\") " pod="calico-system/goldmane-5b85766d88-9f7nn"
Apr 21 10:17:56.325402 kubelet[3556]: I0421 10:17:56.325233 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/97d14944-e822-477b-a225-60b22c64b8f0-nginx-config\") pod \"whisker-58fc5fcf6d-w6pl7\" (UID: \"97d14944-e822-477b-a225-60b22c64b8f0\") " pod="calico-system/whisker-58fc5fcf6d-w6pl7"
Apr 21 10:17:56.325402 kubelet[3556]: I0421 10:17:56.325255 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfgmw\" (UniqueName: \"kubernetes.io/projected/97d14944-e822-477b-a225-60b22c64b8f0-kube-api-access-hfgmw\") pod \"whisker-58fc5fcf6d-w6pl7\" (UID: \"97d14944-e822-477b-a225-60b22c64b8f0\") " pod="calico-system/whisker-58fc5fcf6d-w6pl7"
Apr 21 10:17:56.325402 kubelet[3556]: I0421 10:17:56.325275 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a490155-4011-4010-b8d7-bb01de1814bf-config-volume\") pod \"coredns-674b8bbfcf-n6ppd\" (UID: \"9a490155-4011-4010-b8d7-bb01de1814bf\") " pod="kube-system/coredns-674b8bbfcf-n6ppd"
Apr 21 10:17:56.325402 kubelet[3556]: I0421 10:17:56.325304 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hl27r\" (UniqueName: \"kubernetes.io/projected/9a490155-4011-4010-b8d7-bb01de1814bf-kube-api-access-hl27r\") pod \"coredns-674b8bbfcf-n6ppd\" (UID: \"9a490155-4011-4010-b8d7-bb01de1814bf\") " pod="kube-system/coredns-674b8bbfcf-n6ppd"
Apr 21 10:17:56.325628 kubelet[3556]: I0421 10:17:56.325342 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/befe1eda-78f2-4643-854f-76cc3bc600cc-config\") pod \"goldmane-5b85766d88-9f7nn\" (UID: \"befe1eda-78f2-4643-854f-76cc3bc600cc\") " pod="calico-system/goldmane-5b85766d88-9f7nn"
Apr 21 10:17:56.325628 kubelet[3556]: I0421 10:17:56.325368 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9kxz\" (UniqueName: \"kubernetes.io/projected/befe1eda-78f2-4643-854f-76cc3bc600cc-kube-api-access-n9kxz\") pod \"goldmane-5b85766d88-9f7nn\" (UID: \"befe1eda-78f2-4643-854f-76cc3bc600cc\") " pod="calico-system/goldmane-5b85766d88-9f7nn"
Apr 21 10:17:56.325628 kubelet[3556]: I0421 10:17:56.325395 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhznn\" (UniqueName: \"kubernetes.io/projected/c4b99c43-504c-45a9-acca-981cff89876f-kube-api-access-bhznn\") pod \"calico-apiserver-84d9dbc967-bnvcp\" (UID: \"c4b99c43-504c-45a9-acca-981cff89876f\") " pod="calico-system/calico-apiserver-84d9dbc967-bnvcp"
Apr 21 10:17:56.325628 kubelet[3556]: I0421 10:17:56.325437 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c4b99c43-504c-45a9-acca-981cff89876f-calico-apiserver-certs\") pod \"calico-apiserver-84d9dbc967-bnvcp\" (UID: \"c4b99c43-504c-45a9-acca-981cff89876f\") " pod="calico-system/calico-apiserver-84d9dbc967-bnvcp"
Apr 21 10:17:56.325628 kubelet[3556]: I0421 10:17:56.325461 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/022e9cdc-d1df-4cf1-836a-2007c1cb8d2f-tigera-ca-bundle\") pod \"calico-kube-controllers-5f6d597596-vzm6n\" (UID: \"022e9cdc-d1df-4cf1-836a-2007c1cb8d2f\") " pod="calico-system/calico-kube-controllers-5f6d597596-vzm6n"
Apr 21 10:17:56.325848 kubelet[3556]: I0421 10:17:56.325494 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97d14944-e822-477b-a225-60b22c64b8f0-whisker-backend-key-pair\") pod \"whisker-58fc5fcf6d-w6pl7\" (UID: \"97d14944-e822-477b-a225-60b22c64b8f0\") " pod="calico-system/whisker-58fc5fcf6d-w6pl7"
Apr 21 10:17:56.325848 kubelet[3556]: I0421 10:17:56.325521 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc8tp\" (UniqueName: \"kubernetes.io/projected/022e9cdc-d1df-4cf1-836a-2007c1cb8d2f-kube-api-access-mc8tp\") pod \"calico-kube-controllers-5f6d597596-vzm6n\" (UID: \"022e9cdc-d1df-4cf1-836a-2007c1cb8d2f\") " pod="calico-system/calico-kube-controllers-5f6d597596-vzm6n"
Apr 21 10:17:56.325848 kubelet[3556]: I0421 10:17:56.325555 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgbk\" (UniqueName: \"kubernetes.io/projected/b336af44-2e6f-48b3-8a64-c248629bc9bc-kube-api-access-5wgbk\") pod \"calico-apiserver-84d9dbc967-x8phj\" (UID: \"b336af44-2e6f-48b3-8a64-c248629bc9bc\") " pod="calico-system/calico-apiserver-84d9dbc967-x8phj"
Apr 21 10:17:56.325848 kubelet[3556]: I0421 10:17:56.325586 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b336af44-2e6f-48b3-8a64-c248629bc9bc-calico-apiserver-certs\") pod \"calico-apiserver-84d9dbc967-x8phj\" (UID: \"b336af44-2e6f-48b3-8a64-c248629bc9bc\") " pod="calico-system/calico-apiserver-84d9dbc967-x8phj"
Apr 21 10:17:56.550797 containerd[2109]: time="2026-04-21T10:17:56.550737837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lpv22,Uid:3fc0e5e1-29eb-4eba-bbc3-f696b0a92007,Namespace:kube-system,Attempt:0,}"
Apr 21 10:17:56.575288 containerd[2109]: time="2026-04-21T10:17:56.574984451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d9dbc967-x8phj,Uid:b336af44-2e6f-48b3-8a64-c248629bc9bc,Namespace:calico-system,Attempt:0,}"
Apr 21 10:17:56.575431 containerd[2109]: time="2026-04-21T10:17:56.575325850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58fc5fcf6d-w6pl7,Uid:97d14944-e822-477b-a225-60b22c64b8f0,Namespace:calico-system,Attempt:0,}"
Apr 21 10:17:56.603990 containerd[2109]: time="2026-04-21T10:17:56.603446247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6d597596-vzm6n,Uid:022e9cdc-d1df-4cf1-836a-2007c1cb8d2f,Namespace:calico-system,Attempt:0,}"
Apr 21 10:17:56.608491 containerd[2109]: time="2026-04-21T10:17:56.607157273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d9dbc967-bnvcp,Uid:c4b99c43-504c-45a9-acca-981cff89876f,Namespace:calico-system,Attempt:0,}"
Apr 21 10:17:56.636328 containerd[2109]: time="2026-04-21T10:17:56.636272004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9f7nn,Uid:befe1eda-78f2-4643-854f-76cc3bc600cc,Namespace:calico-system,Attempt:0,}"
Apr 21 10:17:56.669690 containerd[2109]: time="2026-04-21T10:17:56.669213805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n6ppd,Uid:9a490155-4011-4010-b8d7-bb01de1814bf,Namespace:kube-system,Attempt:0,}"
Apr 21 10:17:56.977472 systemd-journald[1577]: Under memory pressure, flushing caches.
Apr 21 10:17:56.977085 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 21 10:17:56.977168 systemd-resolved[1988]: Flushed all caches.
Apr 21 10:17:57.080561 kubelet[3556]: I0421 10:17:57.080486 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hk5xg" podStartSLOduration=3.487675485 podStartE2EDuration="20.080394362s" podCreationTimestamp="2026-04-21 10:17:37 +0000 UTC" firstStartedPulling="2026-04-21 10:17:37.99308037 +0000 UTC m=+21.527182836" lastFinishedPulling="2026-04-21 10:17:54.585799244 +0000 UTC m=+38.119901713" observedRunningTime="2026-04-21 10:17:57.079871518 +0000 UTC m=+40.613973995" watchObservedRunningTime="2026-04-21 10:17:57.080394362 +0000 UTC m=+40.614496846"
Apr 21 10:17:57.667644 containerd[2109]: time="2026-04-21T10:17:57.667582474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rkknl,Uid:78edf098-c99a-45bc-bf91-cfbe789bd2f5,Namespace:calico-system,Attempt:0,}"
Apr 21 10:17:58.197129 systemd[1]: run-containerd-runc-k8s.io-984cff018b863c39e4908377efeff55a14f51f76ffd64ad76130da6dd5e3e1de-runc.l3d5xx.mount: Deactivated successfully.
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:57.727 [INFO][4524] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b"
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:57.728 [INFO][4524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b" iface="eth0" netns="/var/run/netns/cni-45f23e47-c42f-61ca-419e-63366c3c83bc"
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:57.734 [INFO][4524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b" iface="eth0" netns="/var/run/netns/cni-45f23e47-c42f-61ca-419e-63366c3c83bc"
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:57.744 [INFO][4524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b" iface="eth0" netns="/var/run/netns/cni-45f23e47-c42f-61ca-419e-63366c3c83bc"
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:57.744 [INFO][4524] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b"
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:57.744 [INFO][4524] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b"
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:58.099 [INFO][4601] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b" HandleID="k8s-pod-network.9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0"
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:58.099 [INFO][4601] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:58.101 [INFO][4601] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:58.115 [WARNING][4601] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b" HandleID="k8s-pod-network.9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0"
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:58.115 [INFO][4601] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b" HandleID="k8s-pod-network.9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0"
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:58.117 [INFO][4601] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:17:58.203119 containerd[2109]: 2026-04-21 10:17:58.154 [INFO][4524] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b"
Apr 21 10:17:58.212990 systemd[1]: run-netns-cni\x2d45f23e47\x2dc42f\x2d61ca\x2d419e\x2d63366c3c83bc.mount: Deactivated successfully.
Apr 21 10:17:58.213248 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b-shm.mount: Deactivated successfully.
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:57.784 [INFO][4542] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c"
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:57.787 [INFO][4542] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c" iface="eth0" netns="/var/run/netns/cni-d3ad736a-405d-de13-5e7b-45a53fb4ac24"
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:57.788 [INFO][4542] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c" iface="eth0" netns="/var/run/netns/cni-d3ad736a-405d-de13-5e7b-45a53fb4ac24"
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:57.792 [INFO][4542] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c" iface="eth0" netns="/var/run/netns/cni-d3ad736a-405d-de13-5e7b-45a53fb4ac24"
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:57.792 [INFO][4542] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c"
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:57.792 [INFO][4542] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c"
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:58.192 [INFO][4631] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c" HandleID="k8s-pod-network.d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c" Workload="ip--172--31--28--26-k8s-whisker--58fc5fcf6d--w6pl7-eth0"
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:58.201 [INFO][4631] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:58.202 [INFO][4631] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:58.246 [WARNING][4631] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c" HandleID="k8s-pod-network.d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c" Workload="ip--172--31--28--26-k8s-whisker--58fc5fcf6d--w6pl7-eth0"
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:58.246 [INFO][4631] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c" HandleID="k8s-pod-network.d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c" Workload="ip--172--31--28--26-k8s-whisker--58fc5fcf6d--w6pl7-eth0"
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:58.248 [INFO][4631] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:17:58.268987 containerd[2109]: 2026-04-21 10:17:58.260 [INFO][4542] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c"
Apr 21 10:17:58.277765 systemd[1]: run-netns-cni\x2dd3ad736a\x2d405d\x2dde13\x2d5e7b\x2d45a53fb4ac24.mount: Deactivated successfully.
Apr 21 10:17:58.277976 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c-shm.mount: Deactivated successfully.
Apr 21 10:17:58.280575 containerd[2109]: time="2026-04-21T10:17:58.280522793Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58fc5fcf6d-w6pl7,Uid:97d14944-e822-477b-a225-60b22c64b8f0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:17:58.303486 containerd[2109]: time="2026-04-21T10:17:58.303424167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d9dbc967-bnvcp,Uid:c4b99c43-504c-45a9-acca-981cff89876f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:17:58.318132 kubelet[3556]: E0421 10:17:58.317492 3556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:17:58.319793 kubelet[3556]: E0421 10:17:58.319084 3556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 10:17:58.321699 kubelet[3556]: E0421 10:17:58.319511 3556 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84d9dbc967-bnvcp"
Apr 21 10:17:58.321699 kubelet[3556]: E0421 10:17:58.321399 3556 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84d9dbc967-bnvcp"
Apr 21 10:17:58.321699 kubelet[3556]: E0421 10:17:58.321528 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84d9dbc967-bnvcp_calico-system(c4b99c43-504c-45a9-acca-981cff89876f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84d9dbc967-bnvcp_calico-system(c4b99c43-504c-45a9-acca-981cff89876f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cb9c9c551242dcc79641086f8a01a7ebd94bb423283687a5d5c84794a0c525b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84d9dbc967-bnvcp" podUID="c4b99c43-504c-45a9-acca-981cff89876f"
Apr 21 10:17:58.323273 kubelet[3556]: E0421 10:17:58.319174 3556 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed
to setup network for sandbox \"d61e6f6fa45cfae29e054d66823c426a968c99b20d1ed8c7758af00132fd707c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58fc5fcf6d-w6pl7" Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:57.783 [INFO][4531] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:57.788 [INFO][4531] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" iface="eth0" netns="/var/run/netns/cni-1341bac0-8a29-fd8c-294c-c0ba01cd9955" Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:57.788 [INFO][4531] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" iface="eth0" netns="/var/run/netns/cni-1341bac0-8a29-fd8c-294c-c0ba01cd9955" Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:57.791 [INFO][4531] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" iface="eth0" netns="/var/run/netns/cni-1341bac0-8a29-fd8c-294c-c0ba01cd9955" Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:57.791 [INFO][4531] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:57.791 [INFO][4531] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:58.232 [INFO][4628] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" HandleID="k8s-pod-network.43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:58.233 [INFO][4628] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:58.340 [INFO][4628] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:58.350 [WARNING][4628] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" HandleID="k8s-pod-network.43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:58.350 [INFO][4628] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" HandleID="k8s-pod-network.43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:58.353 [INFO][4628] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:58.361195 containerd[2109]: 2026-04-21 10:17:58.358 [INFO][4531] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3" Apr 21 10:17:58.369106 containerd[2109]: time="2026-04-21T10:17:58.367081734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lpv22,Uid:3fc0e5e1-29eb-4eba-bbc3-f696b0a92007,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:58.369248 kubelet[3556]: E0421 10:17:58.367358 3556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:58.369248 kubelet[3556]: E0421 
10:17:58.367422 3556 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lpv22" Apr 21 10:17:58.369248 kubelet[3556]: E0421 10:17:58.367456 3556 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-lpv22" Apr 21 10:17:58.369405 kubelet[3556]: E0421 10:17:58.367515 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-lpv22_kube-system(3fc0e5e1-29eb-4eba-bbc3-f696b0a92007)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-lpv22_kube-system(3fc0e5e1-29eb-4eba-bbc3-f696b0a92007)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-lpv22" podUID="3fc0e5e1-29eb-4eba-bbc3-f696b0a92007" Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:57.761 [INFO][4519] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:57.761 
[INFO][4519] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" iface="eth0" netns="/var/run/netns/cni-aac0faae-a597-d845-fed7-fa1b16f25429" Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:57.761 [INFO][4519] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" iface="eth0" netns="/var/run/netns/cni-aac0faae-a597-d845-fed7-fa1b16f25429" Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:57.763 [INFO][4519] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" iface="eth0" netns="/var/run/netns/cni-aac0faae-a597-d845-fed7-fa1b16f25429" Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:57.763 [INFO][4519] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:57.763 [INFO][4519] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:58.235 [INFO][4612] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" HandleID="k8s-pod-network.5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" Workload="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:58.236 [INFO][4612] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:58.353 [INFO][4612] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:58.363 [WARNING][4612] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" HandleID="k8s-pod-network.5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" Workload="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:58.363 [INFO][4612] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" HandleID="k8s-pod-network.5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" Workload="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:58.365 [INFO][4612] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:58.378177 containerd[2109]: 2026-04-21 10:17:58.371 [INFO][4519] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5" Apr 21 10:17:58.389922 systemd-networkd[1659]: cali831be638f86: Link UP Apr 21 10:17:58.391374 systemd-networkd[1659]: cali831be638f86: Gained carrier Apr 21 10:17:58.392037 containerd[2109]: time="2026-04-21T10:17:58.391739107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6d597596-vzm6n,Uid:022e9cdc-d1df-4cf1-836a-2007c1cb8d2f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:58.392751 kubelet[3556]: E0421 10:17:58.392202 3556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:58.393066 kubelet[3556]: E0421 10:17:58.392881 3556 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f6d597596-vzm6n" Apr 21 10:17:58.393360 kubelet[3556]: E0421 10:17:58.393331 3556 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f6d597596-vzm6n" Apr 21 10:17:58.393561 kubelet[3556]: E0421 10:17:58.393512 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f6d597596-vzm6n_calico-system(022e9cdc-d1df-4cf1-836a-2007c1cb8d2f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f6d597596-vzm6n_calico-system(022e9cdc-d1df-4cf1-836a-2007c1cb8d2f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f6d597596-vzm6n" podUID="022e9cdc-d1df-4cf1-836a-2007c1cb8d2f" Apr 21 10:17:58.403586 (udev-worker)[4702]: Network interface NamePolicy= disabled on kernel command line. Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:57.727 [INFO][4529] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:57.727 [INFO][4529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" iface="eth0" netns="/var/run/netns/cni-8dfb5612-f112-e25b-3d6f-bca3c84472b0" Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:57.728 [INFO][4529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" iface="eth0" netns="/var/run/netns/cni-8dfb5612-f112-e25b-3d6f-bca3c84472b0" Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:57.733 [INFO][4529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" iface="eth0" netns="/var/run/netns/cni-8dfb5612-f112-e25b-3d6f-bca3c84472b0" Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:57.733 [INFO][4529] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:57.733 [INFO][4529] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:58.237 [INFO][4598] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" HandleID="k8s-pod-network.6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" Workload="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:58.237 [INFO][4598] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:58.365 [INFO][4598] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:58.378 [WARNING][4598] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" HandleID="k8s-pod-network.6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" Workload="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:58.379 [INFO][4598] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" HandleID="k8s-pod-network.6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" Workload="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:58.389 [INFO][4598] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:58.410162 containerd[2109]: 2026-04-21 10:17:58.399 [INFO][4529] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7" Apr 21 10:17:58.415563 containerd[2109]: time="2026-04-21T10:17:58.415428023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9f7nn,Uid:befe1eda-78f2-4643-854f-76cc3bc600cc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:58.418002 kubelet[3556]: E0421 10:17:58.415770 3556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:58.418002 kubelet[3556]: E0421 
10:17:58.415838 3556 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-9f7nn" Apr 21 10:17:58.418002 kubelet[3556]: E0421 10:17:58.415870 3556 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-9f7nn" Apr 21 10:17:58.418488 kubelet[3556]: E0421 10:17:58.415934 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-9f7nn_calico-system(befe1eda-78f2-4643-854f-76cc3bc600cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-9f7nn_calico-system(befe1eda-78f2-4643-854f-76cc3bc600cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-9f7nn" podUID="befe1eda-78f2-4643-854f-76cc3bc600cc" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:57.975 [ERROR][4602] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory 
filename="/var/lib/calico/mtu" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.022 [INFO][4602] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0 csi-node-driver- calico-system 78edf098-c99a-45bc-bf91-cfbe789bd2f5 704 0 2026-04-21 10:17:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-26 csi-node-driver-rkknl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali831be638f86 [] [] }} ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Namespace="calico-system" Pod="csi-node-driver-rkknl" WorkloadEndpoint="ip--172--31--28--26-k8s-csi--node--driver--rkknl-" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.025 [INFO][4602] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Namespace="calico-system" Pod="csi-node-driver-rkknl" WorkloadEndpoint="ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.200 [INFO][4648] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" HandleID="k8s-pod-network.fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Workload="ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.226 [INFO][4648] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" 
HandleID="k8s-pod-network.fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Workload="ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004edd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-26", "pod":"csi-node-driver-rkknl", "timestamp":"2026-04-21 10:17:58.200251269 +0000 UTC"}, Hostname:"ip-172-31-28-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004ceb00)} Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.226 [INFO][4648] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.249 [INFO][4648] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.249 [INFO][4648] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-26' Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.262 [INFO][4648] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" host="ip-172-31-28-26" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.282 [INFO][4648] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-26" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.308 [INFO][4648] ipam/ipam.go 526: Trying affinity for 192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.313 [INFO][4648] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.318 [INFO][4648] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.319 [INFO][4648] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" host="ip-172-31-28-26" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.324 [INFO][4648] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.332 [INFO][4648] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" host="ip-172-31-28-26" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.340 [INFO][4648] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.193/26] block=192.168.37.192/26 handle="k8s-pod-network.fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" host="ip-172-31-28-26" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.340 [INFO][4648] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.193/26] handle="k8s-pod-network.fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" host="ip-172-31-28-26" Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.340 [INFO][4648] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 21 10:17:58.435106 containerd[2109]: 2026-04-21 10:17:58.341 [INFO][4648] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.193/26] IPv6=[] ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" HandleID="k8s-pod-network.fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Workload="ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0" Apr 21 10:17:58.436111 containerd[2109]: 2026-04-21 10:17:58.347 [INFO][4602] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Namespace="calico-system" Pod="csi-node-driver-rkknl" WorkloadEndpoint="ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"78edf098-c99a-45bc-bf91-cfbe789bd2f5", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"", Pod:"csi-node-driver-rkknl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali831be638f86", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:58.436111 containerd[2109]: 2026-04-21 10:17:58.347 [INFO][4602] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.193/32] ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Namespace="calico-system" Pod="csi-node-driver-rkknl" WorkloadEndpoint="ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0" Apr 21 10:17:58.436111 containerd[2109]: 2026-04-21 10:17:58.348 [INFO][4602] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali831be638f86 ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Namespace="calico-system" Pod="csi-node-driver-rkknl" WorkloadEndpoint="ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0" Apr 21 10:17:58.436111 containerd[2109]: 2026-04-21 10:17:58.391 [INFO][4602] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Namespace="calico-system" Pod="csi-node-driver-rkknl" WorkloadEndpoint="ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0" Apr 21 10:17:58.436111 containerd[2109]: 2026-04-21 10:17:58.395 [INFO][4602] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Namespace="calico-system" Pod="csi-node-driver-rkknl" WorkloadEndpoint="ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"78edf098-c99a-45bc-bf91-cfbe789bd2f5", 
ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a", Pod:"csi-node-driver-rkknl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.37.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali831be638f86", MAC:"7e:53:6a:7c:99:cb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:17:58.436111 containerd[2109]: 2026-04-21 10:17:58.423 [INFO][4602] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a" Namespace="calico-system" Pod="csi-node-driver-rkknl" WorkloadEndpoint="ip--172--31--28--26-k8s-csi--node--driver--rkknl-eth0" Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:57.758 [INFO][4525] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:57.758 [INFO][4525] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" iface="eth0" netns="/var/run/netns/cni-074b4ed3-5469-130d-3999-63e88c844f2b" Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:57.759 [INFO][4525] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" iface="eth0" netns="/var/run/netns/cni-074b4ed3-5469-130d-3999-63e88c844f2b" Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:57.765 [INFO][4525] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" iface="eth0" netns="/var/run/netns/cni-074b4ed3-5469-130d-3999-63e88c844f2b" Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:57.765 [INFO][4525] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:57.765 [INFO][4525] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:58.239 [INFO][4611] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" HandleID="k8s-pod-network.7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:58.240 [INFO][4611] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:58.391 [INFO][4611] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:58.417 [WARNING][4611] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" HandleID="k8s-pod-network.7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:58.417 [INFO][4611] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" HandleID="k8s-pod-network.7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:58.426 [INFO][4611] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:58.437282 containerd[2109]: 2026-04-21 10:17:58.433 [INFO][4525] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f" Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:57.725 [INFO][4522] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:57.727 [INFO][4522] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" iface="eth0" netns="/var/run/netns/cni-d238de8a-e406-fd4c-8c2f-206462f71054" Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:57.730 [INFO][4522] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" iface="eth0" netns="/var/run/netns/cni-d238de8a-e406-fd4c-8c2f-206462f71054" Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:57.731 [INFO][4522] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" iface="eth0" netns="/var/run/netns/cni-d238de8a-e406-fd4c-8c2f-206462f71054" Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:57.732 [INFO][4522] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:57.732 [INFO][4522] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:58.259 [INFO][4597] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" HandleID="k8s-pod-network.e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:58.259 [INFO][4597] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:58.427 [INFO][4597] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:58.446 [WARNING][4597] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" HandleID="k8s-pod-network.e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:58.446 [INFO][4597] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" HandleID="k8s-pod-network.e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:58.455 [INFO][4597] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:17:58.476078 containerd[2109]: 2026-04-21 10:17:58.461 [INFO][4522] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba" Apr 21 10:17:58.486575 containerd[2109]: time="2026-04-21T10:17:58.486217962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n6ppd,Uid:9a490155-4011-4010-b8d7-bb01de1814bf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:58.486718 kubelet[3556]: E0421 10:17:58.486501 3556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:58.486718 kubelet[3556]: E0421 
10:17:58.486572 3556 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n6ppd" Apr 21 10:17:58.486718 kubelet[3556]: E0421 10:17:58.486598 3556 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-n6ppd" Apr 21 10:17:58.486955 kubelet[3556]: E0421 10:17:58.486655 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-n6ppd_kube-system(9a490155-4011-4010-b8d7-bb01de1814bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-n6ppd_kube-system(9a490155-4011-4010-b8d7-bb01de1814bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-n6ppd" podUID="9a490155-4011-4010-b8d7-bb01de1814bf" Apr 21 10:17:58.489541 containerd[2109]: time="2026-04-21T10:17:58.489417884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d9dbc967-x8phj,Uid:b336af44-2e6f-48b3-8a64-c248629bc9bc,Namespace:calico-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:58.489898 kubelet[3556]: E0421 10:17:58.489667 3556 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:17:58.489898 kubelet[3556]: E0421 10:17:58.489730 3556 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84d9dbc967-x8phj" Apr 21 10:17:58.489898 kubelet[3556]: E0421 10:17:58.489759 3556 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84d9dbc967-x8phj" Apr 21 10:17:58.490587 kubelet[3556]: E0421 10:17:58.489819 3556 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84d9dbc967-x8phj_calico-system(b336af44-2e6f-48b3-8a64-c248629bc9bc)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84d9dbc967-x8phj_calico-system(b336af44-2e6f-48b3-8a64-c248629bc9bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84d9dbc967-x8phj" podUID="b336af44-2e6f-48b3-8a64-c248629bc9bc" Apr 21 10:17:58.501340 containerd[2109]: time="2026-04-21T10:17:58.501141060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:17:58.501768 containerd[2109]: time="2026-04-21T10:17:58.501613168Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:17:58.502143 containerd[2109]: time="2026-04-21T10:17:58.501751857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:58.502143 containerd[2109]: time="2026-04-21T10:17:58.502000498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:17:58.543706 containerd[2109]: time="2026-04-21T10:17:58.543659734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rkknl,Uid:78edf098-c99a-45bc-bf91-cfbe789bd2f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a\"" Apr 21 10:17:58.550882 containerd[2109]: time="2026-04-21T10:17:58.550838788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 21 10:17:59.023289 systemd-resolved[1988]: Under memory pressure, flushing caches. 
Apr 21 10:17:59.023323 systemd-resolved[1988]: Flushed all caches. Apr 21 10:17:59.025058 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:17:59.041129 containerd[2109]: time="2026-04-21T10:17:59.038132308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9f7nn,Uid:befe1eda-78f2-4643-854f-76cc3bc600cc,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:59.044562 containerd[2109]: time="2026-04-21T10:17:59.043663754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n6ppd,Uid:9a490155-4011-4010-b8d7-bb01de1814bf,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:59.066757 containerd[2109]: time="2026-04-21T10:17:59.066326693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d9dbc967-bnvcp,Uid:c4b99c43-504c-45a9-acca-981cff89876f,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:59.066757 containerd[2109]: time="2026-04-21T10:17:59.066579833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lpv22,Uid:3fc0e5e1-29eb-4eba-bbc3-f696b0a92007,Namespace:kube-system,Attempt:0,}" Apr 21 10:17:59.066757 containerd[2109]: time="2026-04-21T10:17:59.066624947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6d597596-vzm6n,Uid:022e9cdc-d1df-4cf1-836a-2007c1cb8d2f,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:59.067077 containerd[2109]: time="2026-04-21T10:17:59.067044024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d9dbc967-x8phj,Uid:b336af44-2e6f-48b3-8a64-c248629bc9bc,Namespace:calico-system,Attempt:0,}" Apr 21 10:17:59.180199 kubelet[3556]: I0421 10:17:59.173253 3556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97d14944-e822-477b-a225-60b22c64b8f0-whisker-ca-bundle\") pod \"97d14944-e822-477b-a225-60b22c64b8f0\" (UID: \"97d14944-e822-477b-a225-60b22c64b8f0\") " Apr 21 
10:17:59.180453 kubelet[3556]: I0421 10:17:59.180422 3556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/97d14944-e822-477b-a225-60b22c64b8f0-nginx-config\") pod \"97d14944-e822-477b-a225-60b22c64b8f0\" (UID: \"97d14944-e822-477b-a225-60b22c64b8f0\") " Apr 21 10:17:59.180646 kubelet[3556]: I0421 10:17:59.180603 3556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hfgmw\" (UniqueName: \"kubernetes.io/projected/97d14944-e822-477b-a225-60b22c64b8f0-kube-api-access-hfgmw\") pod \"97d14944-e822-477b-a225-60b22c64b8f0\" (UID: \"97d14944-e822-477b-a225-60b22c64b8f0\") " Apr 21 10:17:59.184533 kubelet[3556]: I0421 10:17:59.183411 3556 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97d14944-e822-477b-a225-60b22c64b8f0-whisker-backend-key-pair\") pod \"97d14944-e822-477b-a225-60b22c64b8f0\" (UID: \"97d14944-e822-477b-a225-60b22c64b8f0\") " Apr 21 10:17:59.196138 kubelet[3556]: I0421 10:17:59.196083 3556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d14944-e822-477b-a225-60b22c64b8f0-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "97d14944-e822-477b-a225-60b22c64b8f0" (UID: "97d14944-e822-477b-a225-60b22c64b8f0"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:17:59.208421 kubelet[3556]: I0421 10:17:59.191129 3556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97d14944-e822-477b-a225-60b22c64b8f0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "97d14944-e822-477b-a225-60b22c64b8f0" (UID: "97d14944-e822-477b-a225-60b22c64b8f0"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:17:59.211742 systemd[1]: run-netns-cni\x2daac0faae\x2da597\x2dd845\x2dfed7\x2dfa1b16f25429.mount: Deactivated successfully. Apr 21 10:17:59.211947 systemd[1]: run-netns-cni\x2dd238de8a\x2de406\x2dfd4c\x2d8c2f\x2d206462f71054.mount: Deactivated successfully. Apr 21 10:17:59.213441 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e40e99c35f918d2695d07a141fb854aef65a3a509d5eacaa75d0ae5c112bcaba-shm.mount: Deactivated successfully. Apr 21 10:17:59.213611 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e87dd3fbab88c38b94c97b957ce215dceb0e8f0c219a33dc66a64e85d33c4e5-shm.mount: Deactivated successfully. Apr 21 10:17:59.213770 systemd[1]: run-netns-cni\x2d074b4ed3\x2d5469\x2d130d\x2d3999\x2d63e88c844f2b.mount: Deactivated successfully. Apr 21 10:17:59.213907 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7eba180969ab52127fbbd2ab89847d3cd353d3d80a10e9494c8b9f6bfeae270f-shm.mount: Deactivated successfully. Apr 21 10:17:59.214081 systemd[1]: run-netns-cni\x2d8dfb5612\x2df112\x2de25b\x2d3d6f\x2dbca3c84472b0.mount: Deactivated successfully. Apr 21 10:17:59.214222 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6e9494f536897477806694f8f6b10554dca5e5c629e285d19cff1dab71c7ccd7-shm.mount: Deactivated successfully. Apr 21 10:17:59.214366 systemd[1]: run-netns-cni\x2d1341bac0\x2d8a29\x2dfd8c\x2d294c\x2dc0ba01cd9955.mount: Deactivated successfully. Apr 21 10:17:59.214488 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-43ca005b845611afc8c9d1d7b7ec77c466ab3aeaa2a7075d53070eef145f03c3-shm.mount: Deactivated successfully. 
Apr 21 10:17:59.241545 kubelet[3556]: I0421 10:17:59.241410 3556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97d14944-e822-477b-a225-60b22c64b8f0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "97d14944-e822-477b-a225-60b22c64b8f0" (UID: "97d14944-e822-477b-a225-60b22c64b8f0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:17:59.245046 kubelet[3556]: I0421 10:17:59.244936 3556 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d14944-e822-477b-a225-60b22c64b8f0-kube-api-access-hfgmw" (OuterVolumeSpecName: "kube-api-access-hfgmw") pod "97d14944-e822-477b-a225-60b22c64b8f0" (UID: "97d14944-e822-477b-a225-60b22c64b8f0"). InnerVolumeSpecName "kube-api-access-hfgmw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:17:59.247576 systemd[1]: var-lib-kubelet-pods-97d14944\x2de822\x2d477b\x2da225\x2d60b22c64b8f0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhfgmw.mount: Deactivated successfully. Apr 21 10:17:59.247842 systemd[1]: var-lib-kubelet-pods-97d14944\x2de822\x2d477b\x2da225\x2d60b22c64b8f0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 21 10:17:59.286054 kubelet[3556]: I0421 10:17:59.285867 3556 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/97d14944-e822-477b-a225-60b22c64b8f0-whisker-backend-key-pair\") on node \"ip-172-31-28-26\" DevicePath \"\"" Apr 21 10:17:59.286054 kubelet[3556]: I0421 10:17:59.285903 3556 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97d14944-e822-477b-a225-60b22c64b8f0-whisker-ca-bundle\") on node \"ip-172-31-28-26\" DevicePath \"\"" Apr 21 10:17:59.286054 kubelet[3556]: I0421 10:17:59.285917 3556 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/97d14944-e822-477b-a225-60b22c64b8f0-nginx-config\") on node \"ip-172-31-28-26\" DevicePath \"\"" Apr 21 10:17:59.286054 kubelet[3556]: I0421 10:17:59.285932 3556 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hfgmw\" (UniqueName: \"kubernetes.io/projected/97d14944-e822-477b-a225-60b22c64b8f0-kube-api-access-hfgmw\") on node \"ip-172-31-28-26\" DevicePath \"\"" Apr 21 10:17:59.995128 systemd-networkd[1659]: califd8d989dd2d: Link UP Apr 21 10:17:59.998636 systemd-networkd[1659]: califd8d989dd2d: Gained carrier Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.354 [ERROR][4768] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.424 [INFO][4768] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0 coredns-674b8bbfcf- kube-system 9a490155-4011-4010-b8d7-bb01de1814bf 872 0 2026-04-21 10:17:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-26 coredns-674b8bbfcf-n6ppd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califd8d989dd2d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Namespace="kube-system" Pod="coredns-674b8bbfcf-n6ppd" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.427 [INFO][4768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Namespace="kube-system" Pod="coredns-674b8bbfcf-n6ppd" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.782 [INFO][4923] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" HandleID="k8s-pod-network.7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.857 [INFO][4923] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" HandleID="k8s-pod-network.7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103720), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-26", "pod":"coredns-674b8bbfcf-n6ppd", "timestamp":"2026-04-21 10:17:59.782747327 +0000 UTC"}, Hostname:"ip-172-31-28-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004529a0)} Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.857 [INFO][4923] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.857 [INFO][4923] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.857 [INFO][4923] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-26' Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.863 [INFO][4923] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" host="ip-172-31-28-26" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.901 [INFO][4923] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-26" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.924 [INFO][4923] ipam/ipam.go 526: Trying affinity for 192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.932 [INFO][4923] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.940 [INFO][4923] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.940 [INFO][4923] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" host="ip-172-31-28-26" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.942 [INFO][4923] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14 Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 
10:17:59.954 [INFO][4923] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" host="ip-172-31-28-26" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.967 [INFO][4923] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.194/26] block=192.168.37.192/26 handle="k8s-pod-network.7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" host="ip-172-31-28-26" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.967 [INFO][4923] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.194/26] handle="k8s-pod-network.7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" host="ip-172-31-28-26" Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.969 [INFO][4923] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:00.052384 containerd[2109]: 2026-04-21 10:17:59.971 [INFO][4923] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.194/26] IPv6=[] ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" HandleID="k8s-pod-network.7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" Apr 21 10:18:00.054550 containerd[2109]: 2026-04-21 10:17:59.982 [INFO][4768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Namespace="kube-system" Pod="coredns-674b8bbfcf-n6ppd" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9a490155-4011-4010-b8d7-bb01de1814bf", ResourceVersion:"872", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"", Pod:"coredns-674b8bbfcf-n6ppd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd8d989dd2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:00.054550 containerd[2109]: 2026-04-21 10:17:59.983 [INFO][4768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.194/32] ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Namespace="kube-system" Pod="coredns-674b8bbfcf-n6ppd" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" Apr 21 10:18:00.054550 containerd[2109]: 2026-04-21 10:17:59.983 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd8d989dd2d 
ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Namespace="kube-system" Pod="coredns-674b8bbfcf-n6ppd" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" Apr 21 10:18:00.054550 containerd[2109]: 2026-04-21 10:17:59.999 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Namespace="kube-system" Pod="coredns-674b8bbfcf-n6ppd" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" Apr 21 10:18:00.054550 containerd[2109]: 2026-04-21 10:17:59.999 [INFO][4768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Namespace="kube-system" Pod="coredns-674b8bbfcf-n6ppd" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9a490155-4011-4010-b8d7-bb01de1814bf", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14", Pod:"coredns-674b8bbfcf-n6ppd", 
Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califd8d989dd2d", MAC:"2a:0e:46:14:d3:34", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:00.054550 containerd[2109]: 2026-04-21 10:18:00.022 [INFO][4768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14" Namespace="kube-system" Pod="coredns-674b8bbfcf-n6ppd" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--n6ppd-eth0" Apr 21 10:18:00.193728 systemd-networkd[1659]: calidcc8147b936: Link UP Apr 21 10:18:00.194612 systemd-networkd[1659]: calidcc8147b936: Gained carrier Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:17:59.443 [ERROR][4801] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:17:59.507 [INFO][4801] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0 coredns-674b8bbfcf- kube-system 3fc0e5e1-29eb-4eba-bbc3-f696b0a92007 876 0 2026-04-21 10:17:22 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-26 coredns-674b8bbfcf-lpv22 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidcc8147b936 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-lpv22" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:17:59.507 [INFO][4801] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-lpv22" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:17:59.841 [INFO][4942] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" HandleID="k8s-pod-network.9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:17:59.857 [INFO][4942] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" HandleID="k8s-pod-network.9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001226c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-26", "pod":"coredns-674b8bbfcf-lpv22", "timestamp":"2026-04-21 10:17:59.841895161 +0000 UTC"}, Hostname:"ip-172-31-28-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00038c840)} Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:17:59.857 [INFO][4942] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:17:59.968 [INFO][4942] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:17:59.970 [INFO][4942] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-26' Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:17:59.977 [INFO][4942] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" host="ip-172-31-28-26" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:18:00.019 [INFO][4942] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-26" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:18:00.089 [INFO][4942] ipam/ipam.go 526: Trying affinity for 192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:18:00.101 [INFO][4942] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:18:00.111 [INFO][4942] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:18:00.111 [INFO][4942] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" host="ip-172-31-28-26" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:18:00.128 [INFO][4942] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8 Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 
10:18:00.143 [INFO][4942] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" host="ip-172-31-28-26" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:18:00.161 [INFO][4942] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.195/26] block=192.168.37.192/26 handle="k8s-pod-network.9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" host="ip-172-31-28-26" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:18:00.161 [INFO][4942] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.195/26] handle="k8s-pod-network.9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" host="ip-172-31-28-26" Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:18:00.161 [INFO][4942] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:00.239606 containerd[2109]: 2026-04-21 10:18:00.161 [INFO][4942] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.195/26] IPv6=[] ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" HandleID="k8s-pod-network.9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Workload="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" Apr 21 10:18:00.244814 containerd[2109]: 2026-04-21 10:18:00.184 [INFO][4801] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-lpv22" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3fc0e5e1-29eb-4eba-bbc3-f696b0a92007", ResourceVersion:"876", Generation:0, 
CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"", Pod:"coredns-674b8bbfcf-lpv22", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcc8147b936", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:00.244814 containerd[2109]: 2026-04-21 10:18:00.190 [INFO][4801] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.195/32] ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-lpv22" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" Apr 21 10:18:00.244814 containerd[2109]: 2026-04-21 10:18:00.190 [INFO][4801] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcc8147b936 
ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-lpv22" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" Apr 21 10:18:00.244814 containerd[2109]: 2026-04-21 10:18:00.195 [INFO][4801] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-lpv22" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" Apr 21 10:18:00.244814 containerd[2109]: 2026-04-21 10:18:00.195 [INFO][4801] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-lpv22" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3fc0e5e1-29eb-4eba-bbc3-f696b0a92007", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8", Pod:"coredns-674b8bbfcf-lpv22", 
Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.37.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcc8147b936", MAC:"2e:ce:d2:8d:dd:cd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:00.244814 containerd[2109]: 2026-04-21 10:18:00.230 [INFO][4801] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8" Namespace="kube-system" Pod="coredns-674b8bbfcf-lpv22" WorkloadEndpoint="ip--172--31--28--26-k8s-coredns--674b8bbfcf--lpv22-eth0" Apr 21 10:18:00.256374 systemd-networkd[1659]: cali831be638f86: Gained IPv6LL Apr 21 10:18:00.431154 systemd-networkd[1659]: cali7137893418d: Link UP Apr 21 10:18:00.433802 kubelet[3556]: I0421 10:18:00.433192 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/610088b2-b604-4950-a2ad-d6a215850163-whisker-ca-bundle\") pod \"whisker-68f89b8cdf-zct9n\" (UID: \"610088b2-b604-4950-a2ad-d6a215850163\") " pod="calico-system/whisker-68f89b8cdf-zct9n" Apr 21 10:18:00.433802 kubelet[3556]: I0421 10:18:00.433268 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ljjq\" (UniqueName: \"kubernetes.io/projected/610088b2-b604-4950-a2ad-d6a215850163-kube-api-access-9ljjq\") pod 
\"whisker-68f89b8cdf-zct9n\" (UID: \"610088b2-b604-4950-a2ad-d6a215850163\") " pod="calico-system/whisker-68f89b8cdf-zct9n" Apr 21 10:18:00.433802 kubelet[3556]: I0421 10:18:00.433297 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/610088b2-b604-4950-a2ad-d6a215850163-whisker-backend-key-pair\") pod \"whisker-68f89b8cdf-zct9n\" (UID: \"610088b2-b604-4950-a2ad-d6a215850163\") " pod="calico-system/whisker-68f89b8cdf-zct9n" Apr 21 10:18:00.433802 kubelet[3556]: I0421 10:18:00.433331 3556 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/610088b2-b604-4950-a2ad-d6a215850163-nginx-config\") pod \"whisker-68f89b8cdf-zct9n\" (UID: \"610088b2-b604-4950-a2ad-d6a215850163\") " pod="calico-system/whisker-68f89b8cdf-zct9n" Apr 21 10:18:00.461359 systemd-networkd[1659]: cali7137893418d: Gained carrier Apr 21 10:18:00.481664 containerd[2109]: time="2026-04-21T10:18:00.478413612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:18:00.481664 containerd[2109]: time="2026-04-21T10:18:00.478514568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:18:00.481664 containerd[2109]: time="2026-04-21T10:18:00.478536427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:00.481664 containerd[2109]: time="2026-04-21T10:18:00.478657319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:17:59.383 [ERROR][4781] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:17:59.432 [INFO][4781] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0 goldmane-5b85766d88- calico-system befe1eda-78f2-4643-854f-76cc3bc600cc 870 0 2026-04-21 10:17:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-26 goldmane-5b85766d88-9f7nn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7137893418d [] [] }} ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Namespace="calico-system" Pod="goldmane-5b85766d88-9f7nn" WorkloadEndpoint="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:17:59.433 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Namespace="calico-system" Pod="goldmane-5b85766d88-9f7nn" WorkloadEndpoint="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:17:59.797 [INFO][4924] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" HandleID="k8s-pod-network.f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Workload="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" Apr 21 10:18:00.508394 
containerd[2109]: 2026-04-21 10:17:59.861 [INFO][4924] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" HandleID="k8s-pod-network.f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Workload="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00061e360), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-26", "pod":"goldmane-5b85766d88-9f7nn", "timestamp":"2026-04-21 10:17:59.797152779 +0000 UTC"}, Hostname:"ip-172-31-28-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000309340)} Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:17:59.861 [INFO][4924] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.162 [INFO][4924] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.163 [INFO][4924] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-26' Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.174 [INFO][4924] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" host="ip-172-31-28-26" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.235 [INFO][4924] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-26" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.291 [INFO][4924] ipam/ipam.go 526: Trying affinity for 192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.303 [INFO][4924] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.312 [INFO][4924] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.312 [INFO][4924] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" host="ip-172-31-28-26" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.337 [INFO][4924] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33 Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.364 [INFO][4924] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" host="ip-172-31-28-26" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.390 [INFO][4924] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.196/26] block=192.168.37.192/26 
handle="k8s-pod-network.f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" host="ip-172-31-28-26" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.392 [INFO][4924] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.196/26] handle="k8s-pod-network.f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" host="ip-172-31-28-26" Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.392 [INFO][4924] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:00.508394 containerd[2109]: 2026-04-21 10:18:00.392 [INFO][4924] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.196/26] IPv6=[] ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" HandleID="k8s-pod-network.f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Workload="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" Apr 21 10:18:00.509501 containerd[2109]: 2026-04-21 10:18:00.406 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Namespace="calico-system" Pod="goldmane-5b85766d88-9f7nn" WorkloadEndpoint="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"befe1eda-78f2-4643-854f-76cc3bc600cc", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"", Pod:"goldmane-5b85766d88-9f7nn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7137893418d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:00.509501 containerd[2109]: 2026-04-21 10:18:00.406 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.196/32] ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Namespace="calico-system" Pod="goldmane-5b85766d88-9f7nn" WorkloadEndpoint="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" Apr 21 10:18:00.509501 containerd[2109]: 2026-04-21 10:18:00.406 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7137893418d ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Namespace="calico-system" Pod="goldmane-5b85766d88-9f7nn" WorkloadEndpoint="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" Apr 21 10:18:00.509501 containerd[2109]: 2026-04-21 10:18:00.464 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Namespace="calico-system" Pod="goldmane-5b85766d88-9f7nn" WorkloadEndpoint="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" Apr 21 10:18:00.509501 containerd[2109]: 2026-04-21 10:18:00.466 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Namespace="calico-system" Pod="goldmane-5b85766d88-9f7nn" WorkloadEndpoint="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"befe1eda-78f2-4643-854f-76cc3bc600cc", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33", Pod:"goldmane-5b85766d88-9f7nn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.37.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7137893418d", MAC:"fa:d3:d5:90:cd:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:00.509501 containerd[2109]: 2026-04-21 10:18:00.498 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33" Namespace="calico-system" Pod="goldmane-5b85766d88-9f7nn" 
WorkloadEndpoint="ip--172--31--28--26-k8s-goldmane--5b85766d88--9f7nn-eth0" Apr 21 10:18:00.595585 systemd-networkd[1659]: cali4779a550c6f: Link UP Apr 21 10:18:00.603246 systemd-networkd[1659]: cali4779a550c6f: Gained carrier Apr 21 10:18:00.698060 containerd[2109]: time="2026-04-21T10:18:00.615891526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:18:00.698060 containerd[2109]: time="2026-04-21T10:18:00.615960301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:18:00.698060 containerd[2109]: time="2026-04-21T10:18:00.615977409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:00.698060 containerd[2109]: time="2026-04-21T10:18:00.628797638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:17:59.442 [ERROR][4790] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:17:59.506 [INFO][4790] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0 calico-apiserver-84d9dbc967- calico-system b336af44-2e6f-48b3-8a64-c248629bc9bc 873 0 2026-04-21 10:17:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84d9dbc967 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-26 calico-apiserver-84d9dbc967-x8phj 
eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali4779a550c6f [] [] }} ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-x8phj" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:17:59.506 [INFO][4790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-x8phj" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:17:59.893 [INFO][4940] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" HandleID="k8s-pod-network.6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:17:59.941 [INFO][4940] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" HandleID="k8s-pod-network.6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000457440), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-26", "pod":"calico-apiserver-84d9dbc967-x8phj", "timestamp":"2026-04-21 10:17:59.892989087 +0000 UTC"}, Hostname:"ip-172-31-28-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00041e160)} Apr 21 
10:18:00.714349 containerd[2109]: 2026-04-21 10:17:59.941 [INFO][4940] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.393 [INFO][4940] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.393 [INFO][4940] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-26' Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.399 [INFO][4940] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" host="ip-172-31-28-26" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.433 [INFO][4940] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-26" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.446 [INFO][4940] ipam/ipam.go 526: Trying affinity for 192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.463 [INFO][4940] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.472 [INFO][4940] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.472 [INFO][4940] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" host="ip-172-31-28-26" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.482 [INFO][4940] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54 Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.494 [INFO][4940] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.192/26 
handle="k8s-pod-network.6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" host="ip-172-31-28-26" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.524 [INFO][4940] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.197/26] block=192.168.37.192/26 handle="k8s-pod-network.6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" host="ip-172-31-28-26" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.525 [INFO][4940] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.197/26] handle="k8s-pod-network.6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" host="ip-172-31-28-26" Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.525 [INFO][4940] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:00.714349 containerd[2109]: 2026-04-21 10:18:00.525 [INFO][4940] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.197/26] IPv6=[] ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" HandleID="k8s-pod-network.6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" Apr 21 10:18:00.716601 containerd[2109]: 2026-04-21 10:18:00.541 [INFO][4790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-x8phj" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0", GenerateName:"calico-apiserver-84d9dbc967-", Namespace:"calico-system", SelfLink:"", UID:"b336af44-2e6f-48b3-8a64-c248629bc9bc", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 36, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d9dbc967", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"", Pod:"calico-apiserver-84d9dbc967-x8phj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4779a550c6f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:00.716601 containerd[2109]: 2026-04-21 10:18:00.553 [INFO][4790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.197/32] ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-x8phj" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" Apr 21 10:18:00.716601 containerd[2109]: 2026-04-21 10:18:00.558 [INFO][4790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4779a550c6f ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-x8phj" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" Apr 21 10:18:00.716601 containerd[2109]: 2026-04-21 10:18:00.637 [INFO][4790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-x8phj" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" Apr 21 10:18:00.716601 containerd[2109]: 2026-04-21 10:18:00.643 [INFO][4790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-x8phj" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0", GenerateName:"calico-apiserver-84d9dbc967-", Namespace:"calico-system", SelfLink:"", UID:"b336af44-2e6f-48b3-8a64-c248629bc9bc", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d9dbc967", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54", Pod:"calico-apiserver-84d9dbc967-x8phj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"cali4779a550c6f", MAC:"9a:d8:35:64:e8:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:00.716601 containerd[2109]: 2026-04-21 10:18:00.690 [INFO][4790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-x8phj" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--x8phj-eth0" Apr 21 10:18:00.729619 containerd[2109]: time="2026-04-21T10:18:00.729563682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68f89b8cdf-zct9n,Uid:610088b2-b604-4950-a2ad-d6a215850163,Namespace:calico-system,Attempt:0,}" Apr 21 10:18:00.754492 kubelet[3556]: I0421 10:18:00.754450 3556 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d14944-e822-477b-a225-60b22c64b8f0" path="/var/lib/kubelet/pods/97d14944-e822-477b-a225-60b22c64b8f0/volumes" Apr 21 10:18:00.789164 systemd-networkd[1659]: cali8f86cc941cb: Link UP Apr 21 10:18:00.800678 systemd-networkd[1659]: cali8f86cc941cb: Gained carrier Apr 21 10:18:00.888639 containerd[2109]: time="2026-04-21T10:18:00.878116919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:18:00.888639 containerd[2109]: time="2026-04-21T10:18:00.878222397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:18:00.888639 containerd[2109]: time="2026-04-21T10:18:00.878247473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:00.888639 containerd[2109]: time="2026-04-21T10:18:00.878363205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:17:59.520 [ERROR][4806] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:17:59.576 [INFO][4806] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0 calico-kube-controllers-5f6d597596- calico-system 022e9cdc-d1df-4cf1-836a-2007c1cb8d2f 874 0 2026-04-21 10:17:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f6d597596 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-26 calico-kube-controllers-5f6d597596-vzm6n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8f86cc941cb [] [] }} ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Namespace="calico-system" Pod="calico-kube-controllers-5f6d597596-vzm6n" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:17:59.578 [INFO][4806] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Namespace="calico-system" Pod="calico-kube-controllers-5f6d597596-vzm6n" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:17:59.979 [INFO][4957] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" 
HandleID="k8s-pod-network.7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Workload="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.022 [INFO][4957] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" HandleID="k8s-pod-network.7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Workload="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d6f40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-26", "pod":"calico-kube-controllers-5f6d597596-vzm6n", "timestamp":"2026-04-21 10:17:59.979397052 +0000 UTC"}, Hostname:"ip-172-31-28-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f66e0)} Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.025 [INFO][4957] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.525 [INFO][4957] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.525 [INFO][4957] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-26' Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.533 [INFO][4957] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" host="ip-172-31-28-26" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.598 [INFO][4957] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-26" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.649 [INFO][4957] ipam/ipam.go 526: Trying affinity for 192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.654 [INFO][4957] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.665 [INFO][4957] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.665 [INFO][4957] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" host="ip-172-31-28-26" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.675 [INFO][4957] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09 Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.690 [INFO][4957] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" host="ip-172-31-28-26" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.705 [INFO][4957] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.198/26] block=192.168.37.192/26 
handle="k8s-pod-network.7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" host="ip-172-31-28-26" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.706 [INFO][4957] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.198/26] handle="k8s-pod-network.7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" host="ip-172-31-28-26" Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.706 [INFO][4957] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:00.894096 containerd[2109]: 2026-04-21 10:18:00.706 [INFO][4957] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.198/26] IPv6=[] ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" HandleID="k8s-pod-network.7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Workload="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" Apr 21 10:18:00.897951 containerd[2109]: 2026-04-21 10:18:00.754 [INFO][4806] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Namespace="calico-system" Pod="calico-kube-controllers-5f6d597596-vzm6n" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0", GenerateName:"calico-kube-controllers-5f6d597596-", Namespace:"calico-system", SelfLink:"", UID:"022e9cdc-d1df-4cf1-836a-2007c1cb8d2f", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f6d597596", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"", Pod:"calico-kube-controllers-5f6d597596-vzm6n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f86cc941cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:00.897951 containerd[2109]: 2026-04-21 10:18:00.754 [INFO][4806] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.198/32] ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Namespace="calico-system" Pod="calico-kube-controllers-5f6d597596-vzm6n" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" Apr 21 10:18:00.897951 containerd[2109]: 2026-04-21 10:18:00.755 [INFO][4806] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f86cc941cb ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Namespace="calico-system" Pod="calico-kube-controllers-5f6d597596-vzm6n" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" Apr 21 10:18:00.897951 containerd[2109]: 2026-04-21 10:18:00.806 [INFO][4806] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Namespace="calico-system" Pod="calico-kube-controllers-5f6d597596-vzm6n" 
WorkloadEndpoint="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" Apr 21 10:18:00.897951 containerd[2109]: 2026-04-21 10:18:00.806 [INFO][4806] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Namespace="calico-system" Pod="calico-kube-controllers-5f6d597596-vzm6n" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0", GenerateName:"calico-kube-controllers-5f6d597596-", Namespace:"calico-system", SelfLink:"", UID:"022e9cdc-d1df-4cf1-836a-2007c1cb8d2f", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f6d597596", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09", Pod:"calico-kube-controllers-5f6d597596-vzm6n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.37.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f86cc941cb", MAC:"8a:8f:ce:5e:20:63", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:00.897951 containerd[2109]: 2026-04-21 10:18:00.866 [INFO][4806] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09" Namespace="calico-system" Pod="calico-kube-controllers-5f6d597596-vzm6n" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--kube--controllers--5f6d597596--vzm6n-eth0" Apr 21 10:18:00.965278 systemd-networkd[1659]: cali5bd490b7133: Link UP Apr 21 10:18:00.981648 systemd-networkd[1659]: cali5bd490b7133: Gained carrier Apr 21 10:18:00.992088 kernel: calico-node[4994]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:18:01.078041 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:18:01.074172 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:18:01.074517 systemd-resolved[1988]: Flushed all caches. Apr 21 10:18:01.128115 containerd[2109]: time="2026-04-21T10:18:01.109176943Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:18:01.128115 containerd[2109]: time="2026-04-21T10:18:01.109261407Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:18:01.128115 containerd[2109]: time="2026-04-21T10:18:01.109281221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:01.128115 containerd[2109]: time="2026-04-21T10:18:01.109402639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:01.200490 systemd-networkd[1659]: califd8d989dd2d: Gained IPv6LL Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:17:59.573 [ERROR][4815] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:17:59.653 [INFO][4815] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0 calico-apiserver-84d9dbc967- calico-system c4b99c43-504c-45a9-acca-981cff89876f 871 0 2026-04-21 10:17:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84d9dbc967 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-26 calico-apiserver-84d9dbc967-bnvcp eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5bd490b7133 [] [] }} ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-bnvcp" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:17:59.653 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-bnvcp" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.010 [INFO][4963] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" HandleID="k8s-pod-network.070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.044 [INFO][4963] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" HandleID="k8s-pod-network.070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039b9c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-26", "pod":"calico-apiserver-84d9dbc967-bnvcp", "timestamp":"2026-04-21 10:18:00.010753988 +0000 UTC"}, Hostname:"ip-172-31-28-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000501a20)} Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.044 [INFO][4963] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.706 [INFO][4963] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.706 [INFO][4963] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-26' Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.721 [INFO][4963] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" host="ip-172-31-28-26" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.742 [INFO][4963] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-26" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.783 [INFO][4963] ipam/ipam.go 526: Trying affinity for 192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.816 [INFO][4963] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.832 [INFO][4963] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.845 [INFO][4963] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" host="ip-172-31-28-26" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.853 [INFO][4963] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.894 [INFO][4963] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" host="ip-172-31-28-26" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.921 [INFO][4963] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.199/26] block=192.168.37.192/26 
handle="k8s-pod-network.070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" host="ip-172-31-28-26" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.921 [INFO][4963] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.199/26] handle="k8s-pod-network.070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" host="ip-172-31-28-26" Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.921 [INFO][4963] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:01.263572 containerd[2109]: 2026-04-21 10:18:00.921 [INFO][4963] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.199/26] IPv6=[] ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" HandleID="k8s-pod-network.070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Workload="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0" Apr 21 10:18:01.267338 containerd[2109]: 2026-04-21 10:18:00.945 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-bnvcp" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0", GenerateName:"calico-apiserver-84d9dbc967-", Namespace:"calico-system", SelfLink:"", UID:"c4b99c43-504c-45a9-acca-981cff89876f", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d9dbc967", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"", Pod:"calico-apiserver-84d9dbc967-bnvcp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5bd490b7133", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:01.267338 containerd[2109]: 2026-04-21 10:18:00.946 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.199/32] ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-bnvcp" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0" Apr 21 10:18:01.267338 containerd[2109]: 2026-04-21 10:18:00.946 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5bd490b7133 ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-bnvcp" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0" Apr 21 10:18:01.267338 containerd[2109]: 2026-04-21 10:18:00.988 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-bnvcp" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0" Apr 21 10:18:01.267338 containerd[2109]: 2026-04-21 10:18:01.001 [INFO][4815] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-bnvcp" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0", GenerateName:"calico-apiserver-84d9dbc967-", Namespace:"calico-system", SelfLink:"", UID:"c4b99c43-504c-45a9-acca-981cff89876f", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 17, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84d9dbc967", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a", Pod:"calico-apiserver-84d9dbc967-bnvcp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.37.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5bd490b7133", MAC:"7e:d1:ed:31:0e:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:01.267338 containerd[2109]: 2026-04-21 10:18:01.025 [INFO][4815] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a" Namespace="calico-system" Pod="calico-apiserver-84d9dbc967-bnvcp" WorkloadEndpoint="ip--172--31--28--26-k8s-calico--apiserver--84d9dbc967--bnvcp-eth0" Apr 21 10:18:01.417840 containerd[2109]: time="2026-04-21T10:18:01.417545106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-lpv22,Uid:3fc0e5e1-29eb-4eba-bbc3-f696b0a92007,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8\"" Apr 21 10:18:01.538050 systemd[1]: run-containerd-runc-k8s.io-9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8-runc.wV1rOF.mount: Deactivated successfully. Apr 21 10:18:01.549724 containerd[2109]: time="2026-04-21T10:18:01.539446441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n6ppd,Uid:9a490155-4011-4010-b8d7-bb01de1814bf,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14\"" Apr 21 10:18:01.669400 containerd[2109]: time="2026-04-21T10:18:01.669167593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9f7nn,Uid:befe1eda-78f2-4643-854f-76cc3bc600cc,Namespace:calico-system,Attempt:0,} returns sandbox id \"f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33\"" Apr 21 10:18:01.706460 containerd[2109]: time="2026-04-21T10:18:01.706311990Z" level=info msg="CreateContainer within sandbox \"7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:18:01.707120 containerd[2109]: time="2026-04-21T10:18:01.676336203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:18:01.707120 containerd[2109]: time="2026-04-21T10:18:01.677840060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:18:01.707120 containerd[2109]: time="2026-04-21T10:18:01.677879872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:01.707120 containerd[2109]: time="2026-04-21T10:18:01.678063233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:01.712363 systemd-networkd[1659]: cali7137893418d: Gained IPv6LL Apr 21 10:18:01.714945 systemd-networkd[1659]: calidcc8147b936: Gained IPv6LL Apr 21 10:18:01.720507 containerd[2109]: time="2026-04-21T10:18:01.720460370Z" level=info msg="CreateContainer within sandbox \"9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:18:01.967265 systemd-networkd[1659]: cali4779a550c6f: Gained IPv6LL Apr 21 10:18:01.989140 containerd[2109]: time="2026-04-21T10:18:01.988313034Z" level=info msg="CreateContainer within sandbox \"7cf4f63ce97b6b7c6f8a5e5a2b34ef72485a7149c642cb36d4b1cd6389e51b14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"817c87d45326bf9de4e74a809548c6066b6fed6315ad78debdd57db9f985f827\"" Apr 21 10:18:01.991394 containerd[2109]: time="2026-04-21T10:18:01.991241295Z" level=info msg="StartContainer for \"817c87d45326bf9de4e74a809548c6066b6fed6315ad78debdd57db9f985f827\"" Apr 21 10:18:02.040622 containerd[2109]: time="2026-04-21T10:18:02.036452298Z" level=info msg="CreateContainer within sandbox \"9dbc889ae3548865a082eda82966cbad9ea67dbdfa89a78dd6fadd6e17e767e8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"75cf9da37cccdb63fbd6a9c2d1be564b786a7530bb3f7130d3090e9bf89b2646\"" Apr 21 10:18:02.041293 containerd[2109]: time="2026-04-21T10:18:02.041254699Z" level=info msg="StartContainer for \"75cf9da37cccdb63fbd6a9c2d1be564b786a7530bb3f7130d3090e9bf89b2646\"" Apr 21 10:18:02.067586 systemd-networkd[1659]: calid8fbdf0763b: Link UP Apr 21 10:18:02.068753 systemd-networkd[1659]: calid8fbdf0763b: Gained carrier Apr 21 10:18:02.143293 containerd[2109]: time="2026-04-21T10:18:02.064153500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:18:02.143293 containerd[2109]: time="2026-04-21T10:18:02.064235290Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:18:02.143293 containerd[2109]: time="2026-04-21T10:18:02.064260144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:02.143293 containerd[2109]: time="2026-04-21T10:18:02.064378063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:02.153075 containerd[2109]: time="2026-04-21T10:18:02.151774703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d9dbc967-x8phj,Uid:b336af44-2e6f-48b3-8a64-c248629bc9bc,Namespace:calico-system,Attempt:0,} returns sandbox id \"6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54\"" Apr 21 10:18:02.155297 containerd[2109]: time="2026-04-21T10:18:02.155236427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f6d597596-vzm6n,Uid:022e9cdc-d1df-4cf1-836a-2007c1cb8d2f,Namespace:calico-system,Attempt:0,} returns sandbox id \"7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09\"" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.476 [INFO][5136] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0 whisker-68f89b8cdf- calico-system 610088b2-b604-4950-a2ad-d6a215850163 923 0 2026-04-21 10:18:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:68f89b8cdf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-26 whisker-68f89b8cdf-zct9n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid8fbdf0763b [] [] }} ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" Namespace="calico-system" Pod="whisker-68f89b8cdf-zct9n" WorkloadEndpoint="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.481 [INFO][5136] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" Namespace="calico-system" Pod="whisker-68f89b8cdf-zct9n" WorkloadEndpoint="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0" Apr 21 
10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.751 [INFO][5233] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" HandleID="k8s-pod-network.265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" Workload="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.764 [INFO][5233] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" HandleID="k8s-pod-network.265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" Workload="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000102300), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-26", "pod":"whisker-68f89b8cdf-zct9n", "timestamp":"2026-04-21 10:18:01.751951158 +0000 UTC"}, Hostname:"ip-172-31-28-26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001862c0)} Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.764 [INFO][5233] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.765 [INFO][5233] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.767 [INFO][5233] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-26' Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.776 [INFO][5233] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" host="ip-172-31-28-26" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.795 [INFO][5233] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-28-26" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.817 [INFO][5233] ipam/ipam.go 526: Trying affinity for 192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.836 [INFO][5233] ipam/ipam.go 160: Attempting to load block cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.865 [INFO][5233] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.37.192/26 host="ip-172-31-28-26" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.865 [INFO][5233] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.37.192/26 handle="k8s-pod-network.265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" host="ip-172-31-28-26" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.892 [INFO][5233] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.933 [INFO][5233] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.37.192/26 handle="k8s-pod-network.265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" host="ip-172-31-28-26" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.995 [INFO][5233] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.37.200/26] block=192.168.37.192/26 
handle="k8s-pod-network.265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" host="ip-172-31-28-26" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.995 [INFO][5233] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.37.200/26] handle="k8s-pod-network.265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" host="ip-172-31-28-26" Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.995 [INFO][5233] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:18:02.166606 containerd[2109]: 2026-04-21 10:18:01.995 [INFO][5233] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.37.200/26] IPv6=[] ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" HandleID="k8s-pod-network.265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" Workload="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0" Apr 21 10:18:02.173514 containerd[2109]: 2026-04-21 10:18:02.050 [INFO][5136] cni-plugin/k8s.go 418: Populated endpoint ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" Namespace="calico-system" Pod="whisker-68f89b8cdf-zct9n" WorkloadEndpoint="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0", GenerateName:"whisker-68f89b8cdf-", Namespace:"calico-system", SelfLink:"", UID:"610088b2-b604-4950-a2ad-d6a215850163", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68f89b8cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"", Pod:"whisker-68f89b8cdf-zct9n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8fbdf0763b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:02.173514 containerd[2109]: 2026-04-21 10:18:02.050 [INFO][5136] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.37.200/32] ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" Namespace="calico-system" Pod="whisker-68f89b8cdf-zct9n" WorkloadEndpoint="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0" Apr 21 10:18:02.173514 containerd[2109]: 2026-04-21 10:18:02.051 [INFO][5136] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid8fbdf0763b ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" Namespace="calico-system" Pod="whisker-68f89b8cdf-zct9n" WorkloadEndpoint="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0" Apr 21 10:18:02.173514 containerd[2109]: 2026-04-21 10:18:02.083 [INFO][5136] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" Namespace="calico-system" Pod="whisker-68f89b8cdf-zct9n" WorkloadEndpoint="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0" Apr 21 10:18:02.173514 containerd[2109]: 2026-04-21 10:18:02.089 [INFO][5136] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" 
Namespace="calico-system" Pod="whisker-68f89b8cdf-zct9n" WorkloadEndpoint="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0", GenerateName:"whisker-68f89b8cdf-", Namespace:"calico-system", SelfLink:"", UID:"610088b2-b604-4950-a2ad-d6a215850163", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 18, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"68f89b8cdf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-26", ContainerID:"265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b", Pod:"whisker-68f89b8cdf-zct9n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.37.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid8fbdf0763b", MAC:"26:90:69:46:3f:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:18:02.173514 containerd[2109]: 2026-04-21 10:18:02.133 [INFO][5136] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b" Namespace="calico-system" Pod="whisker-68f89b8cdf-zct9n" WorkloadEndpoint="ip--172--31--28--26-k8s-whisker--68f89b8cdf--zct9n-eth0" Apr 21 10:18:02.224507 
systemd-networkd[1659]: cali8f86cc941cb: Gained IPv6LL Apr 21 10:18:02.431064 containerd[2109]: time="2026-04-21T10:18:02.430478609Z" level=info msg="StartContainer for \"817c87d45326bf9de4e74a809548c6066b6fed6315ad78debdd57db9f985f827\" returns successfully" Apr 21 10:18:02.447057 containerd[2109]: time="2026-04-21T10:18:02.444714479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:18:02.447057 containerd[2109]: time="2026-04-21T10:18:02.444806755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:18:02.447057 containerd[2109]: time="2026-04-21T10:18:02.444826449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:02.447057 containerd[2109]: time="2026-04-21T10:18:02.444960090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:18:02.506661 systemd[1]: run-containerd-runc-k8s.io-7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09-runc.9yn3z3.mount: Deactivated successfully. 
Apr 21 10:18:02.518975 containerd[2109]: time="2026-04-21T10:18:02.518869422Z" level=info msg="StartContainer for \"75cf9da37cccdb63fbd6a9c2d1be564b786a7530bb3f7130d3090e9bf89b2646\" returns successfully" Apr 21 10:18:02.775181 containerd[2109]: time="2026-04-21T10:18:02.772964601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84d9dbc967-bnvcp,Uid:c4b99c43-504c-45a9-acca-981cff89876f,Namespace:calico-system,Attempt:0,} returns sandbox id \"070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a\"" Apr 21 10:18:02.858079 containerd[2109]: time="2026-04-21T10:18:02.855700268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68f89b8cdf-zct9n,Uid:610088b2-b604-4950-a2ad-d6a215850163,Namespace:calico-system,Attempt:0,} returns sandbox id \"265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b\"" Apr 21 10:18:02.863300 systemd-networkd[1659]: cali5bd490b7133: Gained IPv6LL Apr 21 10:18:03.007660 containerd[2109]: time="2026-04-21T10:18:03.007589817Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 21 10:18:03.020842 containerd[2109]: time="2026-04-21T10:18:03.020778277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 4.469888398s" Apr 21 10:18:03.021118 containerd[2109]: time="2026-04-21T10:18:03.021093367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 21 10:18:03.026820 containerd[2109]: time="2026-04-21T10:18:03.026301873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 21 10:18:03.045466 
containerd[2109]: time="2026-04-21T10:18:03.044935785Z" level=info msg="CreateContainer within sandbox \"fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 10:18:03.087943 containerd[2109]: time="2026-04-21T10:18:03.087893393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:03.127473 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:18:03.119822 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:18:03.119997 systemd-resolved[1988]: Flushed all caches. Apr 21 10:18:03.142490 containerd[2109]: time="2026-04-21T10:18:03.137093851Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:03.142490 containerd[2109]: time="2026-04-21T10:18:03.138362813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:03.148911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3629597250.mount: Deactivated successfully. Apr 21 10:18:03.160015 systemd-networkd[1659]: vxlan.calico: Link UP Apr 21 10:18:03.161876 containerd[2109]: time="2026-04-21T10:18:03.160330680Z" level=info msg="CreateContainer within sandbox \"fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e4280d09be1e4b5b21e6dfa370c594998fb462dbb286082612b36dd9eb358110\"" Apr 21 10:18:03.163495 (udev-worker)[4701]: Network interface NamePolicy= disabled on kernel command line. 
Apr 21 10:18:03.164099 systemd-networkd[1659]: vxlan.calico: Gained carrier Apr 21 10:18:03.171552 containerd[2109]: time="2026-04-21T10:18:03.166659512Z" level=info msg="StartContainer for \"e4280d09be1e4b5b21e6dfa370c594998fb462dbb286082612b36dd9eb358110\"" Apr 21 10:18:03.376281 containerd[2109]: time="2026-04-21T10:18:03.376211687Z" level=info msg="StartContainer for \"e4280d09be1e4b5b21e6dfa370c594998fb462dbb286082612b36dd9eb358110\" returns successfully" Apr 21 10:18:03.524047 kubelet[3556]: I0421 10:18:03.521894 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n6ppd" podStartSLOduration=41.515444341 podStartE2EDuration="41.515444341s" podCreationTimestamp="2026-04-21 10:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:18:03.502438977 +0000 UTC m=+47.036541455" watchObservedRunningTime="2026-04-21 10:18:03.515444341 +0000 UTC m=+47.049546819" Apr 21 10:18:03.595893 kubelet[3556]: I0421 10:18:03.592658 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-lpv22" podStartSLOduration=41.592636589 podStartE2EDuration="41.592636589s" podCreationTimestamp="2026-04-21 10:17:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:18:03.592539108 +0000 UTC m=+47.126641584" watchObservedRunningTime="2026-04-21 10:18:03.592636589 +0000 UTC m=+47.126739065" Apr 21 10:18:03.890457 systemd-networkd[1659]: calid8fbdf0763b: Gained IPv6LL Apr 21 10:18:04.912745 systemd-networkd[1659]: vxlan.calico: Gained IPv6LL Apr 21 10:18:05.658312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2115286366.mount: Deactivated successfully. 
Apr 21 10:18:06.252470 containerd[2109]: time="2026-04-21T10:18:06.252420596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:06.254161 containerd[2109]: time="2026-04-21T10:18:06.253976415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 21 10:18:06.255677 containerd[2109]: time="2026-04-21T10:18:06.255532152Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:06.258705 containerd[2109]: time="2026-04-21T10:18:06.258634680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:06.260142 containerd[2109]: time="2026-04-21T10:18:06.259508534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.233151746s" Apr 21 10:18:06.260142 containerd[2109]: time="2026-04-21T10:18:06.259553261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 21 10:18:06.261765 containerd[2109]: time="2026-04-21T10:18:06.261612817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:18:06.264753 containerd[2109]: time="2026-04-21T10:18:06.264572394Z" level=info msg="CreateContainer within sandbox 
\"f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 10:18:06.288182 containerd[2109]: time="2026-04-21T10:18:06.288047072Z" level=info msg="CreateContainer within sandbox \"f7cf419da63dfec32a3a75d44e0fa407ef35d5b31f98f96e12d6a1ce39144f33\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"d6d5411d583edde07d9ec7cd53a02b4a420b6379202035f01a585efca3878609\"" Apr 21 10:18:06.289636 containerd[2109]: time="2026-04-21T10:18:06.289585973Z" level=info msg="StartContainer for \"d6d5411d583edde07d9ec7cd53a02b4a420b6379202035f01a585efca3878609\"" Apr 21 10:18:06.350337 systemd[1]: run-containerd-runc-k8s.io-d6d5411d583edde07d9ec7cd53a02b4a420b6379202035f01a585efca3878609-runc.7N4im5.mount: Deactivated successfully. Apr 21 10:18:06.401675 containerd[2109]: time="2026-04-21T10:18:06.401625624Z" level=info msg="StartContainer for \"d6d5411d583edde07d9ec7cd53a02b4a420b6379202035f01a585efca3878609\" returns successfully" Apr 21 10:18:06.646203 kubelet[3556]: I0421 10:18:06.644665 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-9f7nn" podStartSLOduration=26.093910607 podStartE2EDuration="30.644589325s" podCreationTimestamp="2026-04-21 10:17:36 +0000 UTC" firstStartedPulling="2026-04-21 10:18:01.71007962 +0000 UTC m=+45.244182078" lastFinishedPulling="2026-04-21 10:18:06.260758323 +0000 UTC m=+49.794860796" observedRunningTime="2026-04-21 10:18:06.643891716 +0000 UTC m=+50.177994192" watchObservedRunningTime="2026-04-21 10:18:06.644589325 +0000 UTC m=+50.178691802" Apr 21 10:18:07.668866 systemd[1]: run-containerd-runc-k8s.io-d6d5411d583edde07d9ec7cd53a02b4a420b6379202035f01a585efca3878609-runc.ghyDA7.mount: Deactivated successfully. 
Apr 21 10:18:07.865114 ntpd[2061]: Listen normally on 6 vxlan.calico 192.168.37.192:123 Apr 21 10:18:07.865378 ntpd[2061]: Listen normally on 7 cali831be638f86 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 21 10:18:07.865446 ntpd[2061]: Listen normally on 8 califd8d989dd2d [fe80::ecee:eeff:feee:eeee%5]:123 Apr 21 10:18:07.865488 ntpd[2061]: Listen normally on 9 calidcc8147b936 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 21 10:18:07.865530 ntpd[2061]: Listen normally on 10 cali7137893418d [fe80::ecee:eeff:feee:eeee%7]:123 Apr 21 10:18:07.865569 ntpd[2061]: Listen normally on 11 cali4779a550c6f [fe80::ecee:eeff:feee:eeee%8]:123 Apr 21 10:18:07.865608 ntpd[2061]: Listen normally 
on 12 cali8f86cc941cb [fe80::ecee:eeff:feee:eeee%9]:123 Apr 21 10:18:07.865649 ntpd[2061]: Listen normally on 13 cali5bd490b7133 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 21 10:18:07.865699 ntpd[2061]: Listen normally on 14 calid8fbdf0763b [fe80::ecee:eeff:feee:eeee%11]:123 Apr 21 10:18:07.865743 ntpd[2061]: Listen normally on 15 vxlan.calico [fe80::64cb:64ff:fe71:cea1%12]:123 Apr 21 10:18:08.680996 systemd[1]: run-containerd-runc-k8s.io-d6d5411d583edde07d9ec7cd53a02b4a420b6379202035f01a585efca3878609-runc.XWmIBG.mount: Deactivated successfully. Apr 21 10:18:09.525911 containerd[2109]: time="2026-04-21T10:18:09.525862520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:09.527727 containerd[2109]: time="2026-04-21T10:18:09.527658652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 21 10:18:09.530567 containerd[2109]: time="2026-04-21T10:18:09.530437525Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:09.534269 containerd[2109]: time="2026-04-21T10:18:09.534224218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:09.535233 containerd[2109]: time="2026-04-21T10:18:09.535196306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.273533161s" Apr 21 10:18:09.535336 
containerd[2109]: time="2026-04-21T10:18:09.535242228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:18:09.536829 containerd[2109]: time="2026-04-21T10:18:09.536798868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 21 10:18:09.545390 containerd[2109]: time="2026-04-21T10:18:09.545350864Z" level=info msg="CreateContainer within sandbox \"6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:18:09.572437 containerd[2109]: time="2026-04-21T10:18:09.572363025Z" level=info msg="CreateContainer within sandbox \"6ffca8b6882cafd340955c936fbca891de78eaf2810f9796ee9737fb81e99e54\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a860d70219c81cc7b5d0fed75de494101cbabeea4874a56d951fe9388c110f6b\"" Apr 21 10:18:09.577293 containerd[2109]: time="2026-04-21T10:18:09.574947363Z" level=info msg="StartContainer for \"a860d70219c81cc7b5d0fed75de494101cbabeea4874a56d951fe9388c110f6b\"" Apr 21 10:18:09.675296 containerd[2109]: time="2026-04-21T10:18:09.675248469Z" level=info msg="StartContainer for \"a860d70219c81cc7b5d0fed75de494101cbabeea4874a56d951fe9388c110f6b\" returns successfully" Apr 21 10:18:10.676518 kubelet[3556]: I0421 10:18:10.676438 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-84d9dbc967-x8phj" podStartSLOduration=27.301471133 podStartE2EDuration="34.676414747s" podCreationTimestamp="2026-04-21 10:17:36 +0000 UTC" firstStartedPulling="2026-04-21 10:18:02.16163899 +0000 UTC m=+45.695741447" lastFinishedPulling="2026-04-21 10:18:09.536582591 +0000 UTC m=+53.070685061" observedRunningTime="2026-04-21 10:18:10.674600175 +0000 UTC m=+54.208702666" watchObservedRunningTime="2026-04-21 10:18:10.676414747 +0000 UTC 
m=+54.210517222" Apr 21 10:18:12.976406 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:18:12.979464 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:18:12.976463 systemd-resolved[1988]: Flushed all caches. Apr 21 10:18:13.236258 containerd[2109]: time="2026-04-21T10:18:13.236132500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:13.238957 containerd[2109]: time="2026-04-21T10:18:13.238886332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:18:13.241208 containerd[2109]: time="2026-04-21T10:18:13.241158236Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:13.245149 containerd[2109]: time="2026-04-21T10:18:13.245018367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:13.246235 containerd[2109]: time="2026-04-21T10:18:13.245750406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.708910258s" Apr 21 10:18:13.246235 containerd[2109]: time="2026-04-21T10:18:13.245794941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:18:13.265271 
containerd[2109]: time="2026-04-21T10:18:13.265229064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 21 10:18:13.574421 containerd[2109]: time="2026-04-21T10:18:13.574359863Z" level=info msg="CreateContainer within sandbox \"7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 10:18:13.669911 containerd[2109]: time="2026-04-21T10:18:13.669856189Z" level=info msg="CreateContainer within sandbox \"7783c0f199295dcfdaa20b747fc6f0a596bb9dd3e686da8b30ea34080079fe09\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"05eaa5dde0a21edc54634382835aa2575d4f2f46f5b72a670dda59fe813d527e\"" Apr 21 10:18:13.676423 containerd[2109]: time="2026-04-21T10:18:13.676005755Z" level=info msg="StartContainer for \"05eaa5dde0a21edc54634382835aa2575d4f2f46f5b72a670dda59fe813d527e\"" Apr 21 10:18:13.947253 containerd[2109]: time="2026-04-21T10:18:13.944498005Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:13.947253 containerd[2109]: time="2026-04-21T10:18:13.946321366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 21 10:18:13.950776 containerd[2109]: time="2026-04-21T10:18:13.950717077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 685.440812ms" Apr 21 10:18:13.950776 containerd[2109]: time="2026-04-21T10:18:13.950790044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 21 10:18:13.954971 containerd[2109]: time="2026-04-21T10:18:13.954836312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:18:13.962062 containerd[2109]: time="2026-04-21T10:18:13.962005455Z" level=info msg="CreateContainer within sandbox \"070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 10:18:14.047009 containerd[2109]: time="2026-04-21T10:18:14.046080822Z" level=info msg="CreateContainer within sandbox \"070cfad0fc4eece3a9617017abeca9f94988311c2a8d996a28a23017f7f3bd8a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"96032db3da623cacec4436f6b2165bc3fb79c6e59622656120472afad028bd74\"" Apr 21 10:18:14.050308 containerd[2109]: time="2026-04-21T10:18:14.050268184Z" level=info msg="StartContainer for \"96032db3da623cacec4436f6b2165bc3fb79c6e59622656120472afad028bd74\"" Apr 21 10:18:14.220751 containerd[2109]: time="2026-04-21T10:18:14.220627923Z" level=info msg="StartContainer for \"05eaa5dde0a21edc54634382835aa2575d4f2f46f5b72a670dda59fe813d527e\" returns successfully" Apr 21 10:18:14.295048 containerd[2109]: time="2026-04-21T10:18:14.294810280Z" level=info msg="StartContainer for \"96032db3da623cacec4436f6b2165bc3fb79c6e59622656120472afad028bd74\" returns successfully" Apr 21 10:18:15.024292 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:18:15.028795 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:18:15.024338 systemd-resolved[1988]: Flushed all caches. 
Apr 21 10:18:15.302978 kubelet[3556]: I0421 10:18:15.251645 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f6d597596-vzm6n" podStartSLOduration=27.122170226 podStartE2EDuration="38.215703036s" podCreationTimestamp="2026-04-21 10:17:37 +0000 UTC" firstStartedPulling="2026-04-21 10:18:02.171432132 +0000 UTC m=+45.705534599" lastFinishedPulling="2026-04-21 10:18:13.264964953 +0000 UTC m=+56.799067409" observedRunningTime="2026-04-21 10:18:15.166497 +0000 UTC m=+58.700599475" watchObservedRunningTime="2026-04-21 10:18:15.215703036 +0000 UTC m=+58.749805512" Apr 21 10:18:15.305582 kubelet[3556]: I0421 10:18:15.305523 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-84d9dbc967-bnvcp" podStartSLOduration=28.14400062 podStartE2EDuration="39.305504887s" podCreationTimestamp="2026-04-21 10:17:36 +0000 UTC" firstStartedPulling="2026-04-21 10:18:02.790150495 +0000 UTC m=+46.324252958" lastFinishedPulling="2026-04-21 10:18:13.951654753 +0000 UTC m=+57.485757225" observedRunningTime="2026-04-21 10:18:15.206490006 +0000 UTC m=+58.740592483" watchObservedRunningTime="2026-04-21 10:18:15.305504887 +0000 UTC m=+58.839607372" Apr 21 10:18:16.015742 containerd[2109]: time="2026-04-21T10:18:16.015686735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:16.018551 containerd[2109]: time="2026-04-21T10:18:16.017496228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 21 10:18:16.020102 containerd[2109]: time="2026-04-21T10:18:16.019359938Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:16.024916 containerd[2109]: 
time="2026-04-21T10:18:16.024722275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:16.028412 containerd[2109]: time="2026-04-21T10:18:16.027281435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.072153186s" Apr 21 10:18:16.028412 containerd[2109]: time="2026-04-21T10:18:16.028367537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 21 10:18:16.054113 containerd[2109]: time="2026-04-21T10:18:16.053965899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:18:16.064363 containerd[2109]: time="2026-04-21T10:18:16.063788499Z" level=info msg="CreateContainer within sandbox \"265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:18:16.100457 containerd[2109]: time="2026-04-21T10:18:16.100313738Z" level=info msg="CreateContainer within sandbox \"265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"70b75f3998de865a4b8142c5e91d082442aa889109e6e2dac86e3dbe594f8fce\"" Apr 21 10:18:16.106414 containerd[2109]: time="2026-04-21T10:18:16.104504600Z" level=info msg="StartContainer for \"70b75f3998de865a4b8142c5e91d082442aa889109e6e2dac86e3dbe594f8fce\"" Apr 21 10:18:16.273395 systemd[1]: Started sshd@7-172.31.28.26:22-50.85.169.122:60076.service - OpenSSH per-connection 
server daemon (50.85.169.122:60076). Apr 21 10:18:16.388999 systemd[1]: run-containerd-runc-k8s.io-70b75f3998de865a4b8142c5e91d082442aa889109e6e2dac86e3dbe594f8fce-runc.9AMzvj.mount: Deactivated successfully. Apr 21 10:18:16.581052 containerd[2109]: time="2026-04-21T10:18:16.580990543Z" level=info msg="StartContainer for \"70b75f3998de865a4b8142c5e91d082442aa889109e6e2dac86e3dbe594f8fce\" returns successfully" Apr 21 10:18:17.077083 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:18:17.071121 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:18:17.071146 systemd-resolved[1988]: Flushed all caches. Apr 21 10:18:17.485264 sshd[5895]: Accepted publickey for core from 50.85.169.122 port 60076 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:18:17.491227 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:18:17.524053 systemd-logind[2075]: New session 8 of user core. Apr 21 10:18:17.530477 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 21 10:18:18.144834 containerd[2109]: time="2026-04-21T10:18:18.144781470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:18.152775 containerd[2109]: time="2026-04-21T10:18:18.152553148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:18:18.156079 containerd[2109]: time="2026-04-21T10:18:18.155461213Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:18.161243 containerd[2109]: time="2026-04-21T10:18:18.161189367Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:18.162782 containerd[2109]: time="2026-04-21T10:18:18.162723318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.108712102s" Apr 21 10:18:18.162782 containerd[2109]: time="2026-04-21T10:18:18.162773251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:18:18.172932 containerd[2109]: time="2026-04-21T10:18:18.172226772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:18:18.194723 containerd[2109]: time="2026-04-21T10:18:18.194678771Z" level=info 
msg="CreateContainer within sandbox \"fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:18:18.240017 containerd[2109]: time="2026-04-21T10:18:18.239944582Z" level=info msg="CreateContainer within sandbox \"fd20db5b40495a377723380eeb09fde4f36d589ff8834d54b2f2afe3c119250a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"798ef156d7db823fe440f1949f2ad3a2b11a84c10088275469561dfa978c87cc\"" Apr 21 10:18:18.253175 containerd[2109]: time="2026-04-21T10:18:18.251496148Z" level=info msg="StartContainer for \"798ef156d7db823fe440f1949f2ad3a2b11a84c10088275469561dfa978c87cc\"" Apr 21 10:18:18.388440 containerd[2109]: time="2026-04-21T10:18:18.387269330Z" level=info msg="StartContainer for \"798ef156d7db823fe440f1949f2ad3a2b11a84c10088275469561dfa978c87cc\" returns successfully" Apr 21 10:18:19.078896 sshd[5895]: pam_unix(sshd:session): session closed for user core Apr 21 10:18:19.087004 systemd[1]: sshd@7-172.31.28.26:22-50.85.169.122:60076.service: Deactivated successfully. Apr 21 10:18:19.097396 systemd-logind[2075]: Session 8 logged out. Waiting for processes to exit. Apr 21 10:18:19.098503 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 10:18:19.105434 systemd-logind[2075]: Removed session 8. Apr 21 10:18:19.291390 kubelet[3556]: I0421 10:18:19.282855 3556 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:18:19.297372 kubelet[3556]: I0421 10:18:19.297313 3556 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:18:20.408791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2129972535.mount: Deactivated successfully. 
Apr 21 10:18:20.514522 containerd[2109]: time="2026-04-21T10:18:20.465398620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:20.515224 containerd[2109]: time="2026-04-21T10:18:20.480078579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:18:20.515224 containerd[2109]: time="2026-04-21T10:18:20.489571754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.317277545s" Apr 21 10:18:20.515224 containerd[2109]: time="2026-04-21T10:18:20.515140103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:18:20.515963 containerd[2109]: time="2026-04-21T10:18:20.515913321Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:20.517638 containerd[2109]: time="2026-04-21T10:18:20.517183163Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:18:20.542645 containerd[2109]: time="2026-04-21T10:18:20.542510044Z" level=info msg="CreateContainer within sandbox \"265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:18:20.575513 
containerd[2109]: time="2026-04-21T10:18:20.575468246Z" level=info msg="CreateContainer within sandbox \"265f2cae452c1027b724563b20bc57c9f78a16ddb24c8b1cc7685c5a6bc0a95b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"73fe9c98cba2872d8187c181e22429c58e90f82c67468f93d1678a6e76bdfe5e\"" Apr 21 10:18:20.577083 containerd[2109]: time="2026-04-21T10:18:20.576537704Z" level=info msg="StartContainer for \"73fe9c98cba2872d8187c181e22429c58e90f82c67468f93d1678a6e76bdfe5e\"" Apr 21 10:18:20.859078 containerd[2109]: time="2026-04-21T10:18:20.858236946Z" level=info msg="StartContainer for \"73fe9c98cba2872d8187c181e22429c58e90f82c67468f93d1678a6e76bdfe5e\" returns successfully" Apr 21 10:18:20.975520 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:18:20.977199 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:18:20.975572 systemd-resolved[1988]: Flushed all caches. Apr 21 10:18:21.239569 kubelet[3556]: I0421 10:18:21.228697 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rkknl" podStartSLOduration=24.563214386 podStartE2EDuration="44.189153706s" podCreationTimestamp="2026-04-21 10:17:37 +0000 UTC" firstStartedPulling="2026-04-21 10:17:58.545066793 +0000 UTC m=+42.079169248" lastFinishedPulling="2026-04-21 10:18:18.171006086 +0000 UTC m=+61.705108568" observedRunningTime="2026-04-21 10:18:19.208971248 +0000 UTC m=+62.743073723" watchObservedRunningTime="2026-04-21 10:18:21.189153706 +0000 UTC m=+64.723256182" Apr 21 10:18:21.249042 kubelet[3556]: I0421 10:18:21.247671 3556 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-68f89b8cdf-zct9n" podStartSLOduration=3.586532034 podStartE2EDuration="21.247640025s" podCreationTimestamp="2026-04-21 10:18:00 +0000 UTC" firstStartedPulling="2026-04-21 10:18:02.858435053 +0000 UTC m=+46.392537507" lastFinishedPulling="2026-04-21 10:18:20.519543032 +0000 
UTC m=+64.053645498" observedRunningTime="2026-04-21 10:18:21.240524686 +0000 UTC m=+64.774627152" watchObservedRunningTime="2026-04-21 10:18:21.247640025 +0000 UTC m=+64.781742500" Apr 21 10:18:23.023261 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:18:23.023271 systemd-resolved[1988]: Flushed all caches. Apr 21 10:18:23.025049 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:18:24.258427 systemd[1]: Started sshd@8-172.31.28.26:22-50.85.169.122:41548.service - OpenSSH per-connection server daemon (50.85.169.122:41548). Apr 21 10:18:25.345662 sshd[6054]: Accepted publickey for core from 50.85.169.122 port 41548 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:18:25.350920 sshd[6054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:18:25.357654 systemd-logind[2075]: New session 9 of user core. Apr 21 10:18:25.363410 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 10:18:26.487310 sshd[6054]: pam_unix(sshd:session): session closed for user core Apr 21 10:18:26.492234 systemd[1]: sshd@8-172.31.28.26:22-50.85.169.122:41548.service: Deactivated successfully. Apr 21 10:18:26.497512 systemd-logind[2075]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:18:26.497984 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:18:26.502544 systemd-logind[2075]: Removed session 9. Apr 21 10:18:31.650394 systemd[1]: Started sshd@9-172.31.28.26:22-50.85.169.122:52072.service - OpenSSH per-connection server daemon (50.85.169.122:52072). Apr 21 10:18:32.717671 sshd[6108]: Accepted publickey for core from 50.85.169.122 port 52072 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:18:32.722283 sshd[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:18:32.728879 systemd-logind[2075]: New session 10 of user core. 
Apr 21 10:18:32.734713 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:18:33.607003 sshd[6108]: pam_unix(sshd:session): session closed for user core Apr 21 10:18:33.612262 systemd-logind[2075]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:18:33.614185 systemd[1]: sshd@9-172.31.28.26:22-50.85.169.122:52072.service: Deactivated successfully. Apr 21 10:18:33.621124 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:18:33.623122 systemd-logind[2075]: Removed session 10. Apr 21 10:18:38.785009 systemd[1]: Started sshd@10-172.31.28.26:22-50.85.169.122:52084.service - OpenSSH per-connection server daemon (50.85.169.122:52084). Apr 21 10:18:39.872079 sshd[6167]: Accepted publickey for core from 50.85.169.122 port 52084 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:18:39.882408 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:18:39.888778 systemd-logind[2075]: New session 11 of user core. Apr 21 10:18:39.891651 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:18:41.008236 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:18:41.011996 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:18:41.008266 systemd-resolved[1988]: Flushed all caches. Apr 21 10:18:41.049573 sshd[6167]: pam_unix(sshd:session): session closed for user core Apr 21 10:18:41.055072 systemd[1]: sshd@10-172.31.28.26:22-50.85.169.122:52084.service: Deactivated successfully. Apr 21 10:18:41.059756 systemd-logind[2075]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:18:41.059852 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:18:41.063931 systemd-logind[2075]: Removed session 11. Apr 21 10:18:41.222648 systemd[1]: Started sshd@11-172.31.28.26:22-50.85.169.122:54824.service - OpenSSH per-connection server daemon (50.85.169.122:54824). 
Apr 21 10:18:42.240770 sshd[6185]: Accepted publickey for core from 50.85.169.122 port 54824 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:18:42.242666 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:18:42.247982 systemd-logind[2075]: New session 12 of user core. Apr 21 10:18:42.252347 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:18:43.061046 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:18:43.057785 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:18:43.057799 systemd-resolved[1988]: Flushed all caches. Apr 21 10:18:43.160921 sshd[6185]: pam_unix(sshd:session): session closed for user core Apr 21 10:18:43.165602 systemd[1]: sshd@11-172.31.28.26:22-50.85.169.122:54824.service: Deactivated successfully. Apr 21 10:18:43.171411 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:18:43.171653 systemd-logind[2075]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:18:43.174483 systemd-logind[2075]: Removed session 12. Apr 21 10:18:43.333403 systemd[1]: Started sshd@12-172.31.28.26:22-50.85.169.122:54838.service - OpenSSH per-connection server daemon (50.85.169.122:54838). Apr 21 10:18:44.361295 sshd[6197]: Accepted publickey for core from 50.85.169.122 port 54838 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:18:44.362946 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:18:44.368773 systemd-logind[2075]: New session 13 of user core. Apr 21 10:18:44.374377 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:18:45.228239 sshd[6197]: pam_unix(sshd:session): session closed for user core Apr 21 10:18:45.235772 systemd[1]: sshd@12-172.31.28.26:22-50.85.169.122:54838.service: Deactivated successfully. Apr 21 10:18:45.236253 systemd-logind[2075]: Session 13 logged out. Waiting for processes to exit. 
Apr 21 10:18:45.239240 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:18:45.241456 systemd-logind[2075]: Removed session 13. Apr 21 10:18:50.391410 systemd[1]: Started sshd@13-172.31.28.26:22-50.85.169.122:51840.service - OpenSSH per-connection server daemon (50.85.169.122:51840). Apr 21 10:18:51.416290 sshd[6266]: Accepted publickey for core from 50.85.169.122 port 51840 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:18:51.421382 sshd[6266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:18:51.427763 systemd-logind[2075]: New session 14 of user core. Apr 21 10:18:51.434418 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:18:52.707971 sshd[6266]: pam_unix(sshd:session): session closed for user core Apr 21 10:18:52.714140 systemd[1]: sshd@13-172.31.28.26:22-50.85.169.122:51840.service: Deactivated successfully. Apr 21 10:18:52.720348 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:18:52.721620 systemd-logind[2075]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:18:52.723783 systemd-logind[2075]: Removed session 14. Apr 21 10:18:52.879811 systemd[1]: Started sshd@14-172.31.28.26:22-50.85.169.122:51854.service - OpenSSH per-connection server daemon (50.85.169.122:51854). Apr 21 10:18:53.039345 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:18:53.041355 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:18:53.039379 systemd-resolved[1988]: Flushed all caches. Apr 21 10:18:53.889293 sshd[6285]: Accepted publickey for core from 50.85.169.122 port 51854 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:18:53.890965 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:18:53.896422 systemd-logind[2075]: New session 15 of user core. Apr 21 10:18:53.899609 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 21 10:18:55.125468 sshd[6285]: pam_unix(sshd:session): session closed for user core Apr 21 10:18:55.132785 systemd[1]: sshd@14-172.31.28.26:22-50.85.169.122:51854.service: Deactivated successfully. Apr 21 10:18:55.138296 systemd-logind[2075]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:18:55.138518 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:18:55.141460 systemd-logind[2075]: Removed session 15. Apr 21 10:18:55.294632 systemd[1]: Started sshd@15-172.31.28.26:22-50.85.169.122:51866.service - OpenSSH per-connection server daemon (50.85.169.122:51866). Apr 21 10:18:56.320882 sshd[6299]: Accepted publickey for core from 50.85.169.122 port 51866 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:18:56.323937 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:18:56.338653 systemd-logind[2075]: New session 16 of user core. Apr 21 10:18:56.342467 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:18:57.929246 sshd[6299]: pam_unix(sshd:session): session closed for user core Apr 21 10:18:57.939677 systemd[1]: sshd@15-172.31.28.26:22-50.85.169.122:51866.service: Deactivated successfully. Apr 21 10:18:57.945942 systemd-logind[2075]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:18:57.946483 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:18:57.948317 systemd-logind[2075]: Removed session 16. Apr 21 10:18:58.093770 systemd[1]: Started sshd@16-172.31.28.26:22-50.85.169.122:51870.service - OpenSSH per-connection server daemon (50.85.169.122:51870). Apr 21 10:18:59.082322 systemd[1]: run-containerd-runc-k8s.io-984cff018b863c39e4908377efeff55a14f51f76ffd64ad76130da6dd5e3e1de-runc.foILbs.mount: Deactivated successfully. 
Apr 21 10:18:59.091244 sshd[6331]: Accepted publickey for core from 50.85.169.122 port 51870 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:18:59.092647 sshd[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:18:59.099624 systemd-logind[2075]: New session 17 of user core. Apr 21 10:18:59.105458 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 10:19:00.750841 sshd[6331]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:00.759986 systemd-logind[2075]: Session 17 logged out. Waiting for processes to exit. Apr 21 10:19:00.761205 systemd[1]: sshd@16-172.31.28.26:22-50.85.169.122:51870.service: Deactivated successfully. Apr 21 10:19:00.769853 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 10:19:00.771567 systemd-logind[2075]: Removed session 17. Apr 21 10:19:00.927441 systemd[1]: Started sshd@17-172.31.28.26:22-50.85.169.122:54768.service - OpenSSH per-connection server daemon (50.85.169.122:54768). Apr 21 10:19:01.041236 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:19:01.039344 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:19:01.039382 systemd-resolved[1988]: Flushed all caches. Apr 21 10:19:01.990300 sshd[6363]: Accepted publickey for core from 50.85.169.122 port 54768 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:19:01.993977 sshd[6363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:02.016090 systemd-logind[2075]: New session 18 of user core. Apr 21 10:19:02.026359 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 10:19:03.149566 sshd[6363]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:03.162826 systemd-logind[2075]: Session 18 logged out. Waiting for processes to exit. Apr 21 10:19:03.163218 systemd[1]: sshd@17-172.31.28.26:22-50.85.169.122:54768.service: Deactivated successfully. 
Apr 21 10:19:03.172670 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 10:19:03.175082 systemd-logind[2075]: Removed session 18. Apr 21 10:19:08.325521 systemd[1]: Started sshd@18-172.31.28.26:22-50.85.169.122:54776.service - OpenSSH per-connection server daemon (50.85.169.122:54776). Apr 21 10:19:08.749089 systemd[1]: run-containerd-runc-k8s.io-d6d5411d583edde07d9ec7cd53a02b4a420b6379202035f01a585efca3878609-runc.dljy6w.mount: Deactivated successfully. Apr 21 10:19:09.403545 sshd[6379]: Accepted publickey for core from 50.85.169.122 port 54776 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:19:09.408139 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:19:09.418269 systemd-logind[2075]: New session 19 of user core. Apr 21 10:19:09.424392 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 10:19:10.753516 sshd[6379]: pam_unix(sshd:session): session closed for user core Apr 21 10:19:10.777483 systemd-logind[2075]: Session 19 logged out. Waiting for processes to exit. Apr 21 10:19:10.779152 systemd[1]: sshd@18-172.31.28.26:22-50.85.169.122:54776.service: Deactivated successfully. Apr 21 10:19:10.786243 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 10:19:10.792110 systemd-logind[2075]: Removed session 19. Apr 21 10:19:11.023412 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:19:11.025375 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:19:11.023440 systemd-resolved[1988]: Flushed all caches. Apr 21 10:19:13.071400 systemd-resolved[1988]: Under memory pressure, flushing caches. Apr 21 10:19:13.073091 systemd-journald[1577]: Under memory pressure, flushing caches. Apr 21 10:19:13.071409 systemd-resolved[1988]: Flushed all caches. Apr 21 10:19:15.043302 systemd[1]: run-containerd-runc-k8s.io-05eaa5dde0a21edc54634382835aa2575d4f2f46f5b72a670dda59fe813d527e-runc.9Cc57z.mount: Deactivated successfully. 
Apr 21 10:19:15.912843 systemd[1]: Started sshd@19-172.31.28.26:22-50.85.169.122:40556.service - OpenSSH per-connection server daemon (50.85.169.122:40556).
Apr 21 10:19:16.974493 sshd[6433]: Accepted publickey for core from 50.85.169.122 port 40556 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:16.979350 sshd[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:16.986743 systemd-logind[2075]: New session 20 of user core.
Apr 21 10:19:16.991423 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:19:17.948537 sshd[6433]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:17.957307 systemd-logind[2075]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:19:17.960779 systemd[1]: sshd@19-172.31.28.26:22-50.85.169.122:40556.service: Deactivated successfully.
Apr 21 10:19:17.966978 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:19:17.972820 systemd-logind[2075]: Removed session 20.
Apr 21 10:19:19.087396 systemd-resolved[1988]: Under memory pressure, flushing caches.
Apr 21 10:19:19.089200 systemd-journald[1577]: Under memory pressure, flushing caches.
Apr 21 10:19:19.087427 systemd-resolved[1988]: Flushed all caches.
Apr 21 10:19:23.116357 systemd[1]: Started sshd@20-172.31.28.26:22-50.85.169.122:34044.service - OpenSSH per-connection server daemon (50.85.169.122:34044).
Apr 21 10:19:24.125915 sshd[6451]: Accepted publickey for core from 50.85.169.122 port 34044 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:24.129761 sshd[6451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:24.137091 systemd-logind[2075]: New session 21 of user core.
Apr 21 10:19:24.142183 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:19:25.156677 sshd[6451]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:25.160412 systemd[1]: sshd@20-172.31.28.26:22-50.85.169.122:34044.service: Deactivated successfully.
Apr 21 10:19:25.165182 systemd-logind[2075]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:19:25.167666 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:19:25.169098 systemd-logind[2075]: Removed session 21.
Apr 21 10:19:30.316495 systemd[1]: Started sshd@21-172.31.28.26:22-50.85.169.122:41230.service - OpenSSH per-connection server daemon (50.85.169.122:41230).
Apr 21 10:19:31.350899 sshd[6517]: Accepted publickey for core from 50.85.169.122 port 41230 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:19:31.354093 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:19:31.359757 systemd-logind[2075]: New session 22 of user core.
Apr 21 10:19:31.367528 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 10:19:32.407114 sshd[6517]: pam_unix(sshd:session): session closed for user core
Apr 21 10:19:32.410897 systemd[1]: sshd@21-172.31.28.26:22-50.85.169.122:41230.service: Deactivated successfully.
Apr 21 10:19:32.416520 systemd-logind[2075]: Session 22 logged out. Waiting for processes to exit.
Apr 21 10:19:32.417361 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 10:19:32.419699 systemd-logind[2075]: Removed session 22.
Apr 21 10:19:43.161563 update_engine[2080]: I20260421 10:19:43.161449 2080 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 21 10:19:43.161563 update_engine[2080]: I20260421 10:19:43.161533 2080 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 21 10:19:43.166790 update_engine[2080]: I20260421 10:19:43.166735 2080 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 21 10:19:43.168919 update_engine[2080]: I20260421 10:19:43.168879 2080 omaha_request_params.cc:62] Current group set to lts
Apr 21 10:19:43.176132 update_engine[2080]: I20260421 10:19:43.174952 2080 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 21 10:19:43.176132 update_engine[2080]: I20260421 10:19:43.174997 2080 update_attempter.cc:643] Scheduling an action processor start.
Apr 21 10:19:43.176132 update_engine[2080]: I20260421 10:19:43.175051 2080 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 21 10:19:43.176132 update_engine[2080]: I20260421 10:19:43.175120 2080 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 21 10:19:43.176132 update_engine[2080]: I20260421 10:19:43.175227 2080 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 21 10:19:43.176132 update_engine[2080]: I20260421 10:19:43.175238 2080 omaha_request_action.cc:272] Request:
Apr 21 10:19:43.176132 update_engine[2080]:
Apr 21 10:19:43.176132 update_engine[2080]:
Apr 21 10:19:43.176132 update_engine[2080]:
Apr 21 10:19:43.176132 update_engine[2080]:
Apr 21 10:19:43.176132 update_engine[2080]:
Apr 21 10:19:43.176132 update_engine[2080]:
Apr 21 10:19:43.176132 update_engine[2080]:
Apr 21 10:19:43.176132 update_engine[2080]:
Apr 21 10:19:43.176132 update_engine[2080]: I20260421 10:19:43.175248 2080 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 21 10:19:43.205218 update_engine[2080]: I20260421 10:19:43.205150 2080 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 21 10:19:43.205824 update_engine[2080]: I20260421 10:19:43.205542 2080 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 21 10:19:43.205908 locksmithd[2143]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 21 10:19:43.210414 update_engine[2080]: E20260421 10:19:43.210361 2080 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 21 10:19:43.210521 update_engine[2080]: I20260421 10:19:43.210462 2080 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 21 10:19:47.154012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b877d81c908ec696dca9a25657035dbefe26ad757fa33f1b7b6e17bf8eb8481-rootfs.mount: Deactivated successfully.
Apr 21 10:19:47.207413 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b443225e566a93e8c306590d927807a3e33c92bc67cb364bad759114c31962b4-rootfs.mount: Deactivated successfully.
Apr 21 10:19:47.292577 containerd[2109]: time="2026-04-21T10:19:47.251223974Z" level=info msg="shim disconnected" id=b443225e566a93e8c306590d927807a3e33c92bc67cb364bad759114c31962b4 namespace=k8s.io
Apr 21 10:19:47.292577 containerd[2109]: time="2026-04-21T10:19:47.292501559Z" level=warning msg="cleaning up after shim disconnected" id=b443225e566a93e8c306590d927807a3e33c92bc67cb364bad759114c31962b4 namespace=k8s.io
Apr 21 10:19:47.292577 containerd[2109]: time="2026-04-21T10:19:47.292523947Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:19:47.295158 containerd[2109]: time="2026-04-21T10:19:47.249491839Z" level=info msg="shim disconnected" id=5b877d81c908ec696dca9a25657035dbefe26ad757fa33f1b7b6e17bf8eb8481 namespace=k8s.io
Apr 21 10:19:47.295158 containerd[2109]: time="2026-04-21T10:19:47.294316403Z" level=warning msg="cleaning up after shim disconnected" id=5b877d81c908ec696dca9a25657035dbefe26ad757fa33f1b7b6e17bf8eb8481 namespace=k8s.io
Apr 21 10:19:47.295158 containerd[2109]: time="2026-04-21T10:19:47.294333133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 21 10:19:47.793098 kubelet[3556]: I0421 10:19:47.792703 3556 scope.go:117] "RemoveContainer" containerID="b443225e566a93e8c306590d927807a3e33c92bc67cb364bad759114c31962b4"
Apr 21 10:19:47.822064 kubelet[3556]: I0421 10:19:47.821772 3556 scope.go:117] "RemoveContainer" containerID="5b877d81c908ec696dca9a25657035dbefe26ad757fa33f1b7b6e17bf8eb8481"
Apr 21 10:19:47.971009 containerd[2109]: time="2026-04-21T10:19:47.970948383Z" level=info msg="CreateContainer within sandbox \"d772af21f4eac206c5bd3aefc64e271c9122c171dfc541083643bdaf5af1bb90\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 21 10:19:47.971310 containerd[2109]: time="2026-04-21T10:19:47.970946341Z" level=info msg="CreateContainer within sandbox \"3cff99a600f5bb0c6999afcc04e4d08d01cff54dd2ef862331680bdaa73500d5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 21 10:19:48.143993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527225705.mount: Deactivated successfully.
Apr 21 10:19:48.144223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2960645675.mount: Deactivated successfully.
Apr 21 10:19:48.189362 containerd[2109]: time="2026-04-21T10:19:48.189305376Z" level=info msg="CreateContainer within sandbox \"3cff99a600f5bb0c6999afcc04e4d08d01cff54dd2ef862331680bdaa73500d5\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"06405171f1e475c5c53a1c35e39f08242ab243b5f18ddd4050ed25aced766bf6\""
Apr 21 10:19:48.192795 containerd[2109]: time="2026-04-21T10:19:48.192570258Z" level=info msg="CreateContainer within sandbox \"d772af21f4eac206c5bd3aefc64e271c9122c171dfc541083643bdaf5af1bb90\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"9ee88658d874c18643d56025f2c40c0fd4cc329a7d969ff4efebf42553a5e785\""
Apr 21 10:19:48.195831 containerd[2109]: time="2026-04-21T10:19:48.195782469Z" level=info msg="StartContainer for \"06405171f1e475c5c53a1c35e39f08242ab243b5f18ddd4050ed25aced766bf6\""
Apr 21 10:19:48.211604 containerd[2109]: time="2026-04-21T10:19:48.211548985Z" level=info msg="StartContainer for \"9ee88658d874c18643d56025f2c40c0fd4cc329a7d969ff4efebf42553a5e785\""
Apr 21 10:19:48.407347 containerd[2109]: time="2026-04-21T10:19:48.406963042Z" level=info msg="StartContainer for \"9ee88658d874c18643d56025f2c40c0fd4cc329a7d969ff4efebf42553a5e785\" returns successfully"
Apr 21 10:19:48.411983 containerd[2109]: time="2026-04-21T10:19:48.411932495Z" level=info msg="StartContainer for \"06405171f1e475c5c53a1c35e39f08242ab243b5f18ddd4050ed25aced766bf6\" returns successfully"